Category Archives: Technology

Everything new about technology

New Book – Google Cloud Platform Administration


I am delighted to share with you my new book on Google Cloud Platform Administration. Over the past year, I have been working and exploring GCP with customers and clients.

The book introduces you to the Google Cloud Platform and will help you gain an understanding of the different components of the GCP offering. The book covers concepts such as Compute Engine, App Engine, and Kubernetes Engine.

Below is a list of chapters in the book.


The book helps you gain hands-on experience using the GCP portal, cloud shell and the GCP SDK.

I am looking forward to any and all feedback, so feel free to reach out to me @rjapproves.

You can purchase your copy from the links below.





Welcome to a new style of blogging, called the High-Level Overview (HLO) series. In these blogs, I will describe a problem, usually something I came across recently, followed by a high-level overview of how it could be solved. The goal is to get you to dig deeper into the individual components that make this high-level solution possible.

Very recently, I had a call from one of our architects, who was tasked with comparing different clouds and presenting a solution to his customer. While the customer was going forward with one solution, they were interested in finding out how a solution would be built out in AWS. The goal here was to have a replication environment for the customer's on-premises SQL Server so it could be failed over to. Moreover, the customer wanted to be able to stream data out of the SQL environment into an Elastic MapReduce cluster for data analytics purposes. The customer was also concerned about storing large amounts of data in an archive and being able to retrieve it when needed. Needless to say, all the connectivity needed to be secure.

In summary –

Customer Objectives

OBJ.001 – Need to have functional disaster recovery environments for their data.

OBJ.002 – Need to have an effective way to do data processing and modeling while keeping costs low.

OBJ.003 – Need to have an archival methodology to fulfill long-term storage requirements.

OBJ.004 – Ensure data in transit is secure.

Functional Requirements

FR.001 – A busy SQL server needs replication, backup, and archival to ensure availability, disaster recovery, and long-term storage.

FR.002 – Data from SQL server needs to be pushed into elastic MapReduce (EMR) for data processing and modeling.

FR.003 – Data from EMR needs to be archived for long-term storage purposes.

FR.004 – Secure connectivity to all services over which data would traverse from customer on-prem to a cloud provider.

Below is a high-level illustration of the current deployment as I understood it.


The Options –

Given the customer objectives and functional requirements, AWS provides multiple products that help us define a solution to satisfy the customer’s use case. Let’s look at these products individually.

1. Amazon RDS – Amazon's Relational Database Service (RDS) provides a scalable DBaaS offering with the ability to migrate and replicate SQL databases.

2. Amazon S3 – Amazon's object storage offering, Amazon S3 (Simple Storage Service), allows you to store large volumes of data with virtually unlimited capacity.

3. Amazon Glacier – Amazon Glacier is Amazon's archival storage offering that allows you to archive PB-scale data at very low cost. Glacier can also automatically archive data stored on Amazon S3 using lifecycle policies.

4. AWS Kinesis – Amazon Kinesis allows you to collect, process, and analyze real-time data streams. Data can be analyzed using Kinesis Data Analytics, which allows the use of standard SQL queries. You can also push a Kinesis data stream to a stream processing framework like AWS Elastic MapReduce, which supports Apache Hadoop, Spark, and other big-data frameworks.

5. AWS Lambda – AWS Lambda is Amazon's serverless technology that allows you to build small, purpose-built functions that run in response to events.

6. AWS SNS – AWS Simple Notification Service (SNS) allows you to trigger notifications or even AWS Lambda functions based on pre-defined events.

7. AWS VPN Gateway – Allows you to create secure connections between sites.

8. AWS Storage Gateway – Allows you to deploy a virtual machine instance on-premises with different storage options. This virtual machine replicates all data stored on it to an Amazon S3 bucket.

9. AWS Snowball – AWS's solution to migrate large amounts of data using cold migration techniques.

10. AWS DirectConnect – A cost-effective private network solution from on-premises to an AWS datacenter for migrating large data sets. The solution can also be used to keep network traffic on private links rather than the internet.

Let’s connect the dots, slowly.


With AWS VPN Gateway, a customer can connect their on-premises environments securely to the AWS regions. This is crucial and fulfills the customer's FR.004, which requires all data to be secure in transit. It is important to remember that there is a limit of 5 VPN gateways per region; this limit can be increased by reaching out to AWS Support. Alternatively, the customer can use AWS DirectConnect, which may be a better option in this scenario provided the customer's data center is close to an AWS Partner Network (APN) provider. AWS DirectConnect offers consistently high bandwidth (up to 10 Gbps) and a private connection into your VPC network on AWS, which means traffic does not traverse the internet. DirectConnect is also ideal for real-time streaming data and can be used to seamlessly extend the customer's network into AWS.


The customer currently has large sets of data that need to be migrated to AWS. While DirectConnect is an option that allows for high-bandwidth transfers, it can get very expensive. Amazon offers AWS Snowball to help transfer cold data into the AWS cloud. The process is simple: once you put in a request for a Snowball, AWS ships a secure device to your data center, which you connect to your environment. You then copy all your data to the Snowball device and ship it back to AWS. All data on the device is encrypted and secure. AWS also offers Snowball Edge, which adds compute within the device, allowing you to access your data using a local EC2 instance. AWS Snowball has a capacity of 50 TB to 80 TB, while the Edge device goes up to 100 TB.

For PB-scale data migration, AWS offers Snowmobile. This is an 18-wheeler truck carrying a shipping container that acts as a data center. The container is transported to your data center and needs to be connected to power and network. Once done, you can copy PB-scale data to this environment before Amazon picks it back up.

For a simpler way to transfer data, AWS offers storage gateways. A storage gateway is a lightweight virtual machine that can be deployed in your environment and configured with your AWS account. The gateway uses a local disk and exposes it as an iSCSI drive that is accessible by other virtual machines. Any data stored on this drive is then replicated to your Amazon S3 storage. Storage gateways can be configured for hot, cold, and cached data, so you have a variety of options depending on your use case. The storage gateway appliance is free to download and deploy, so it has a low barrier to entry and can ideally be used for file transfers.

Storage and Archival

AWS's S3 (Simple Storage Service) is an object store that offers unlimited storage for files. S3, like any object storage, is accessible over HTTP and HTTPS and stores data securely in AWS's datacenters. The data is replicated locally but can also be replicated across regions (cross-region replication) to increase availability. You can even serve these files directly from S3 into your application. An interesting feature of S3 is its ability to apply lifecycle policies to the files stored in "S3 buckets". You can set a lifecycle policy to archive all files after a set amount of time, and S3 will move them over to AWS Glacier, Amazon's low-cost long-term archival solution.
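
As a concrete sketch, a lifecycle rule of that kind might be built like this with boto3. The bucket name, prefix, and 90-day threshold are illustrative assumptions, not values from the customer engagement:

```python
# Sketch: an S3 lifecycle rule that transitions objects to Glacier.
# The bucket name, "logs/" prefix, and 90-day threshold are all
# hypothetical examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-to-glacier",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }
    ]
}

# With AWS credentials configured, the rule would be applied like so:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```

Once the rule is in place, S3 handles the transition to Glacier automatically; nothing in the application needs to change.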


AWS's Simple Notification Service (SNS) can be configured to alert the customer based on custom or pre-set triggers. You will find SNS used in almost everything in AWS. For instance, when you create a new AWS account, the first thing to do is create an SNS billing alert to ensure that you don't exceed billing thresholds. SNS can also trigger, or be triggered by, other AWS services such as Lambda functions (serverless).


AWS Lambda is Amazon's serverless technology, which allows you to run small, focused functions based on events or triggers. You can trigger a Lambda function to perform a certain task. For example, I can have an SNS alert to ensure that billing does not exceed $100 per day. If it does, I can have an event trigger sent to a Lambda function that will immediately shut down my instances to save on billing costs.
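
As a sketch of that billing-cutoff pattern, here is what such a Lambda handler might look like. The Records/Sns/Message envelope is the standard shape SNS uses when invoking Lambda, but the alarm fields and the shutdown step are illustrative assumptions, not code from a real deployment:

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda handler for an SNS-delivered billing alarm.

    The alarm message fields used here are assumptions for this sketch;
    a real CloudWatch alarm notification carries more detail.
    """
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    if message.get("NewStateValue") == "ALARM":
        # With credentials, this is where instances would be stopped:
        #   boto3.client("ec2").stop_instances(InstanceIds=[...])
        return {"action": "stop", "reason": message.get("NewStateReason", "")}
    return {"action": "none"}

# Simulated event for a billing alarm (fabricated for the sketch).
sample_event = {
    "Records": [{
        "Sns": {"Message": json.dumps({
            "NewStateValue": "ALARM",
            "NewStateReason": "EstimatedCharges exceeded 100 USD",
        })}
    }]
}
print(handler(sample_event))
```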

Data analysis

AWS Kinesis is a solution for real-time data stream analysis. Real-time data can be collected and analyzed using Kinesis Data Analytics, which is helpful for this customer because it allows regular SQL queries to be used to analyze data. The data can also be pushed to a stream processing framework such as AWS Elastic MapReduce for big-data analysis before being archived.
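
A minimal sketch of how SQL rows might be shaped for a Kinesis stream, assuming a hypothetical stream name and column names; the actual put_record call is shown commented out since it needs AWS credentials:

```python
import json

def make_kinesis_record(row: dict, key_field: str) -> dict:
    """Shape one database row as the arguments for a Kinesis put_record call.

    The stream name and column names are hypothetical; a real feed would
    carry whatever change data the customer streams out of SQL Server.
    """
    return {
        "StreamName": "sql-change-stream",
        "Data": json.dumps(row).encode("utf-8"),
        "PartitionKey": str(row[key_field]),  # spreads rows across shards
    }

record = make_kinesis_record({"order_id": 42, "amount": 19.99}, "order_id")
# With AWS credentials configured, the record would be sent with:
#   boto3.client("kinesis").put_record(**record)
```

The partition key choice matters: records with the same key land on the same shard, so picking a high-cardinality column keeps the stream balanced.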


AWS RDS (Relational Database Service) offers a managed database environment which can be readily consumed: you simply deploy database instances and pick a database engine, with engines such as Oracle and SQL Server supported. This fulfills the customer's use case of migrating the SQL database to a remote instance for disaster recovery purposes. AWS RDS allows SQL replication, with changed data replicated from your primary database. You can even have read-only RDS instances and perform disaster recovery tests to fulfill your business continuity plan (BCP).
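
As a hedged sketch, the read replica described above might be created with boto3's create_db_instance_read_replica call; the instance identifiers and instance class here are hypothetical placeholders:

```python
# Sketch: parameters for boto3's create_db_instance_read_replica call.
# The identifiers and instance class are hypothetical placeholders.
replica_params = {
    "DBInstanceIdentifier": "sql-dr-replica",
    "SourceDBInstanceIdentifier": "sql-primary",
    "DBInstanceClass": "db.m5.large",
}

# With credentials, the replica would be created like so:
#   import boto3
#   boto3.client("rds").create_db_instance_read_replica(**replica_params)
```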

Putting it all together


I encourage you to read more about the different solutions discussed in this blog post. Feel free to comment.

Some important links

AWS Networking

AWS Migration


SDDC Era …

A post that went online on Monday.

I've been preoccupied lately with loads of work and SDDC stuff, but it's time to get back to sharing and caring!

The Software-Defined Data Center Era is Here

VMworld 2016, Book giveaway and New Updates!


I have been very, very busy with work and strategic leadership sessions, which have kept me away from contributing to the community, so let me begin with a huge sorry!

So VMworld 2016 – Be_Tomorrow is underway, and it's really great to see VMware focus on IT transformation and a hybrid cloud strategy to enable "seamless" multi-cloud consumption – something few vendors out there offer, and none does it better than VMware themselves.

I am sure you have heard about the new vRealize Automation 7.x release and the upcoming NSX features for multi-cloud and hybrid cloud environments. This means you can use NSX to manage networks in Amazon and Azure! You can also create one security policy that spans all the clouds and have it applied automatically. No more going to each cloud platform to modify your policy – apply it in NSX and see it enforced in AWS, GCP, vCloud Air, Azure, and OpenStack! This is huge – unless you think security is not important, in which case you will be in trouble soon.

Horizon Suite has some major updates as well! Below is a grab from this blog site.

  • Further enhancements to the protocol
  • Improvements in the GPU-encode/decode that significantly lower bandwidth and latency
  • Improvements in the JPG/PNG codec to reduce bandwidth utilization by 6x
  • vRealize Operations integration with Blast Extreme.  I can now see Blast statistics in the vROPs console
  • UEM Smart Policies integration with Blast. I can now use the same PCoIP smart policies to control the Blast protocol. This enhancement also allows administrators to set per-device policies, so I can set different policies for Windows, Mac, Android, and iOS.
  • A Raspberry Pi client

The vRealize Automation features are awesome – the new 7.x version includes:

  • Improved views – You can now see the IP address of the provisioned VM in the Items view and change the All Services icon!
  • Improved Migration and Upgrades – vRA 6 to 7.1
  • Out of the Box Active Directory Integration for provisioning
  • Out of the box IPAM integration – Infoblox
  • Portlets for external web content – Great for posting how to videos or company wide messages.
  • Scale out/in – Support for elastic applications

And still in Beta and super exciting

  • Native Container Support
  • Azure Endpoint – Provision to Microsoft Azure  (YAYYYYYYYYYYYY!!!!!!!)
  • vRA Service Now integration – Use Service Now portal for vRA (I FAINTED!!!)

Pretty awesome, right? Well, there are more upgrades and improvements across the entire VMware portfolio, so I will blog about them soon.

And before I forget – we had a very successful book signing session at the Rackspace booth! Stop by and have a chat with me or other awesome folks.


Azure vs AWS IAAS/Networking Comparison

This is a good picture of what AWS and Azure IaaS/networking look like – the side-by-side product comparison is really helpful when you are evaluating both products and pulling your hair out.

This is again high-level, but it begins to draw a picture in your head and helps connect the dots.



You can read more @ MSDN – 

99% of the World’s Data…


Yes, we will have a quick chat about 99% of the world's data. Did you know that 99% of the world's data was created in the last three years? There is no denying that we are much more connected than we ever were, and this data is only going to skyrocket once we start consuming IoT (Internet of Things) devices and applications.

Scientists estimate that the total data in the world is about 350+ exabytes – that's 350 billion gigabytes – growing at 23% yearly, and this is just an estimate. So should business leaders invest in big data? Absolutely! According to Forbes research, 89% of business leaders agree that big data is the next industry disrupter and will revolutionize how decisions are made. Gartner forecasted that 2016 marks a digital era where big data is going to be a big disrupter.
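
Those figures are easy to sanity-check with a quick compound-growth projection (the 350 EB baseline and 23% yearly rate are the numbers quoted above):

```python
# Project total world data from a 350 EB baseline growing 23% per year.
# 1 exabyte = 1e9 gigabytes, so 350 EB is indeed 350 billion gigabytes.
baseline_eb = 350
growth_rate = 0.23

assert baseline_eb * 1_000_000_000 == 350_000_000_000  # 350 billion GB

for year in range(1, 6):
    projected = baseline_eb * (1 + growth_rate) ** year
    print(f"year {year}: ~{projected:,.0f} EB")
```

At that rate the total would nearly triple in five years, which is why the "skyrocket" claim is not much of a stretch.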

So are you just collecting data or are you going to do something with it? Let me know in the comments below..

Cloud Management Platform – A Strategic Approach


I have been talking to a lot of enterprise companies where the CTOs, CIOs, and architects are trying to break into the "single pane of glass" service management strategy. Their reasons are fair and simple – a single pane of glass gives them seamless view, access, and management capabilities for their entire IT footprint across multiple platforms and multiple regions.

What I hear ..

Most of these conversations have just about two key players who drive the discussion. The CIOs/CTOs really care about an easier way to broker IT services to internal business units. Cloud services brokerage is the new hotness in enterprises, and they are going all out to phase out old, manual, antiquated processes and techniques and replace them with new cloud-savvy applications.

The architects and operations folks, on the other hand, are all about how amazing the whole concept of a single pane of glass is and how much time, effort, and manpower it saves their teams – not to mention the visibility it gives IT staff, allowing them to respond faster to events. (Advanced features in a cloud management platform include event management, monitoring, and even disaster recovery!)

What adds fuel to the fire is that Gartner and 451 Research have clearly indicated that a majority of enterprise companies are transitioning to a cloud services brokerage model, where achieving an effective ITSM model is easier than ever before.

Enterprises are stuck either talking about how great a cloud management platform (single pane of glass) sounds, or having deployed one without really using it to its fullest potential.

Easier said than done

A lot more strategy and planning needs to go into effectively designing and deploying a cloud management platform to enable a reliable and effective service brokerage framework.

The highlighted layers of the picture above are some of the key design aspects that need to be thought through before deploying a management platform. You always have the option to grow your platform and add features at a later stage, but you run the risk of reworking your foundation and architecture, which can have anywhere from minimal to significant impact on your end users.

I will list out a few features that enterprises need to think about.

Read More …


Redoing my home lab is always fun, but it's a lot more fun when NSX comes into the mix. It all started with a customer who wanted NSX badly... well, here I am trying to find out why.

Let me be honest: I am NOT a networking guru. Networking has almost always given me headaches, but NSX feels refreshing and easy enough to wrap my head around.

So here's a quick refresher on a few of the most important components – I don't cover all the components here, just a few for now, with more on the way.

1. Segment IDs – Segment IDs, as Wahl rightly puts it, are like VLANs for your VXLANs. Imagine having multiple NSX Manager instances talking to a single vCenter: their traffic being separated by segment IDs is one use case. Each VXLAN virtual wire gets one segment ID assigned to it. So how many segment IDs are allowed? About 16.7 million of them. So that we don't get confused with the physical VLAN IDs, segment IDs start at 5000. I created a pool of five segment IDs, 5000 through 5004. Also remember this is a system-wide setting!
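
The size of that pool comes straight from the VXLAN header: the VXLAN Network Identifier (VNI) is a 24-bit field, which is what bounds the ID space at millions rather than billions:

```python
# The VXLAN Network Identifier (VNI) is a 24-bit field, so the total
# segment ID space is 2**24 = 16,777,216 -- millions, not billions.
vni_bits = 24
total_ids = 2 ** vni_bits

# NSX starts segment IDs at 5000 to stay clear of the 12-bit physical
# VLAN range (0-4094), so the usable pool is slightly smaller:
usable = total_ids - 5000

print(total_ids, usable)
```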


2. Transport Zones – Transport zones are basically network scopes in vCNS, if you recall what those are. Let me explain: when you create a transport zone, you add a cluster to it. This defines the scope of the VXLAN virtual wires. If you have 5 clusters that need to be able to access the same VXLAN virtual wires, they need to be part of the same transport zone.

3. Logical Switches – Logical switches are virtual wires – basically VXLANs. When you create a logical switch, you assign it to a transport zone. All clusters that belong to that transport zone are then configured with, and exposed to, that logical switch. This allows VMs in those clusters to talk to each other over the VXLAN logical wire without a physical VLAN having to be created. Remember, once a logical switch is created you cannot change its transport zone; you have to remove and recreate it to do that.
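
The scoping rule above (a cluster can see a logical switch only if it belongs to that switch's transport zone) can be sketched as a simple membership check; the zone, cluster, and switch names below are hypothetical:

```python
# Hypothetical model of NSX scoping: a transport zone maps to its member
# clusters, and each logical switch belongs to exactly one zone.
transport_zones = {
    "tz-prod": {"cluster-a", "cluster-b"},
    "tz-dev": {"cluster-c"},
}
logical_switches = {"ls-web": "tz-prod", "ls-test": "tz-dev"}

def can_reach(cluster: str, switch: str) -> bool:
    """A cluster sees a logical switch only via the switch's transport zone."""
    zone = logical_switches[switch]
    return cluster in transport_zones[zone]

print(can_reach("cluster-a", "ls-web"))  # in tz-prod, so reachable
print(can_reach("cluster-c", "ls-web"))  # in tz-dev, so not reachable
```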


More to come – I'll keep you posted 🙂



While on wfhacation (Work From Home + Vacation), I noticed that the vExpert 2015 list was out, alongside the vSphere 6.0 announcement. I was awarded the vExpert 2015 title, my second one, which is awesome.

If you have been involved in the community and kept an eye out for the who’s who, then this list of candidates should not come as a surprise.

Here is the full list of candidates who have been awarded the title.

This is inspiring and allows me and others to continue to contribute and give back. In 2015 I look forward to expanding my focus beyond VMware, but never losing focus on VMware. Let's see where the ride takes me; I just need to find the seat belt and strap myself in. 🙂

Needless to say – RJ Approves this message!!

vExpert 2014 AND DELIGHTED!

I wanted to share some good, I mean great, news.

I have been awarded the prestigious title of vExpert and am now a member of the club.

Thank you VMware for the recognition.

You can find the list of vExperts here.

My vExpert Profile is here.