Friday, 18 July 2014

Azure Machine Learning–K Means Clustering…


 

 


Machine Learning (ML) has been around for over five decades. In the last couple of years, with cloud computing and big data becoming the dominant colours of the IT industry, ML has found a unique place in the big data problem space.

A Brief on Azure ML

Azure ML is Microsoft's simpler offering for a quick and easy entry into machine learning. It is definitely a good starting point for getting used to machine learning. As one starts using it more often, one realizes that Azure ML is limiting in terms of choice of algorithms, data manipulation operations, and the ability to run as part of a bigger scheme of things.

Most folks start with Azure ML and figure out that there are multiple places where the constructs are limiting, so as a good citizen MSFT added Execute R: one can program in R Studio for test and development and eventually, for larger datasets, port or intelligently copy the code into Execute R. A good video on how to use Execute R in Azure ML can be found here http://channel9.msdn.com/Blogs/Windows-Azure/R-in-Azure-ML-Studio

K-Means Clustering in Azure ML Video-

Data analysis starts with the initial task of having to group or classify data. There are various algorithms one may employ for this; the simplest and most widely used is K-Means clustering. This session talks about K-Means clustering and how to do the same using Azure ML.
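For a quick intuition of what such a clustering module does under the hood, below is a minimal, illustrative K-Means sketch in C# (a toy example of the algorithm itself, not the Azure ML module or the R scripts linked below): alternate between assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points.

// Toy K-Means sketch: 2-D points, k clusters, fixed number of iterations.
// Illustrative only; Azure ML / R provide their own implementations.
using System;
using System.Linq;

class KMeansSketch
{
    static void Main()
    {
        double[][] points =
        {
            new[] { 1.0, 1.0 }, new[] { 1.2, 0.8 }, new[] { 0.9, 1.1 },
            new[] { 8.0, 8.0 }, new[] { 8.2, 7.9 }, new[] { 7.8, 8.1 }
        };
        int k = 2;
        var rnd = new Random(42);

        // Start with k randomly chosen points as the initial centroids.
        double[][] centroids = points.OrderBy(_ => rnd.Next()).Take(k)
                                     .Select(p => (double[])p.Clone()).ToArray();
        int[] assignment = new int[points.Length];

        for (int iter = 0; iter < 50; iter++)
        {
            // Assignment step: each point goes to its nearest centroid.
            for (int i = 0; i < points.Length; i++)
                assignment[i] = Enumerable.Range(0, k)
                    .OrderBy(c => Distance(points[i], centroids[c])).First();

            // Update step: each centroid becomes the mean of its assigned points.
            for (int c = 0; c < k; c++)
            {
                var members = points.Where((_, i) => assignment[i] == c).ToArray();
                if (members.Length == 0) continue; // keep the old centroid if a cluster is empty
                for (int d = 0; d < centroids[c].Length; d++)
                    centroids[c][d] = members.Average(m => m[d]);
            }
        }

        for (int i = 0; i < points.Length; i++)
            Console.WriteLine("({0}, {1}) -> cluster {2}", points[i][0], points[i][1], assignment[i]);
    }

    static double Distance(double[] a, double[] b)
    {
        return Math.Sqrt(a.Zip(b, (x, y) => (x - y) * (x - y)).Sum());
    }
}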

The session takeaway: Azure ML is great, but not without R.

 

Files

Presentation shared here: https://drive.google.com/file/d/0B5lmX16jC3ZEcEprM2F0aW9FOVU/edit?usp=sharing

DataSets & Demo Here

RScripts - https://drive.google.com/file/d/0B5lmX16jC3ZES1c5MEdiamFCN0E/edit?usp=sharing

DataSets- https://drive.google.com/file/d/0B5lmX16jC3ZEY2R0T3R3WDU3NzQ/edit?usp=sharing

 

 

Tuesday, 13 May 2014

Azure API Management





Microsoft's buyout of Apiphany, sometime around mid-October 2013, has made its way into the Azure stack as API Management in a very short while.
Here is a complete YouTube video on Azure API Management.

APIs are no longer an afterthought for most architectures; they are right there in front, staring the architect in the face. An API facade can make or break an architecture, from improving integration and developer productivity to delivering high business returns. There are many API platforms in the market.
Azure API Management is a direct port of Apiphany into Azure. In this post we will explore Azure API Management and, towards the end, compare it with the leading API platforms.
Azure API Management has managed to pull a lot of features into Azure in a very short period of time.
API Management Administrators Portal
The starting point for any API management platform is a good enough administrator portal. The administrator portal is essentially the place where one can manage the APIs. Below is a snapshot of the same.
[Screenshot: API Management administrator portal]
The Administrator Portal for API Management covers the following  at a high level.
  • Dashboard – Quick snapshot view of all the APIs with their health graphs, products, and applications.
  • APIs – Managing APIs, their associated operations, settings, etc.
  • Products – This is essentially a container of APIs and the access management around the same.
  • Policies – This is a key feature for APIs. The consumers of an API are initially developers, and in production it is going to be applications. There are many generic constructs required by an API, for example:
    • Quota management: the number of times a consumer can call the APIs
    • Format changes: one may need to change a parameter/output value before invoking/returning from the actual API, or change the format, e.g. JSON, XML
    • Allow cross-domain calls
    • CORS
    • Data changes – replace certain values inbound/outbound
    • Restrict caller IPs
    • Store to cache
There are additional constructs which can be applied.
Note: Most good API platforms have the ability to add workflow constructs to the inbound and outbound flow of the APIs. For every call made to the API, one may want to call other services based on the data. As of now there are no workflow constructs in Azure API Management; one can expect these in the future.
  • Analytics: The core objective for any API platform is health management of the APIs; analytics includes the health management information, graphs, etc.
  • Developers: Considering that a decent portion of the API consumers are going to be developers, the management around the same is important. This is fairly basic as of now.
  • Security: Support for third-party identity for developer sign-up.
  • Developer Portal: This is the content management piece of the API portal, which includes the APIs, documentation around the same, the products, and issue reporting.
In a nutshell, Azure API Management is a much-awaited piece of the Azure stack and is off to a flying start.

Thursday, 17 April 2014

Azure RMS (Rights Management Service)


 

With the growing need for data protection and compliance, data security is always a key concern for any organization. Documents carry sensitive data. Availability of this data on multiple devices is critical to the business, and equally important is securing it. Data at rest and in transit both need to be secured.

Almost a decade ago MSFT released its Rights Management Service for Windows Server 2003; later came AD RMS with 2008 and 2012. In the past year we have seen Azure take up the RMS brigade, assuming there is an Azure Active Directory in place.

With the buzz around HIPAA compliance and Azure, there is still no clear directive on whether Azure is fully compliant.

What most customers want is a robust security solution that protects the organization's data on file servers and desktops by rights-protecting it, and makes this data available via secure access. Data security and data-access control help enable sharing of sensitive data in a secure and HIPAA-compliant manner, and ensure confidentiality, integrity and authorized access to the data.

Microsoft Azure cloud-service-based Rights Management (RMS) is the answer. This solution concept uses Azure Rights Management Service (RMS) as the basic building block for securing data and documents/files. The devices used for data access may or may not be connected to the organization's network. (Devices are assumed to be organization-locked and HIPAA compliant.)


Azure RMS will serve as the basic building block for encryption/decryption of documents/files within the client's organization. Windows Server 2012 File Classification Infrastructure (FCI) will assist in automating the process of applying rights to documents. Most of the files to be accessed within the organization will be stored on the Windows file server, which has the File Classification Infrastructure service installed.

Azure RMS protection comes with three basic types of content protection:

  • Native RMS Microsoft formats – Office files.
  • Non-MSFT formats – Native PDF, image files (bmp, jpeg, …), txt. These will work only with the P-Viewer tool for consumption and FoxIt for PDF. One can write additional plugins if required.
  • Container-level protection – A P-file container which imposes encryption/decryption at the container level. The important point to note is that once a file is out of the container it is unencrypted, has no rights management and is not secured. The container can hold any file type.

The Windows Server File Classification Infrastructure (FCI) feature will identify sensitive files and encrypt them with RMS. FCI crawls file shares for files meeting certain criteria and tags them based on the results. Tags are stored in the file attributes and persist even after moving files to another NTFS store. Once files are tagged, they automatically become applicable for "RMS Encryption" based on certain tags with RMS templates.

The RMS templates to be defined are organization-wide. With FCI one can perform different actions on files identified as sensitive. One of them is to use the in-box RMS protection capability, and there can be custom tasks for supporting other types of files through two options:

· Putting files in encrypted container using Rights Protected Folder Explorer (RPFe)

· Triggering specific RMS protectors for certain types of files, such as PDF, CAD or images, supported with partner solutions

RPFe is a better option for protecting other file types. File Classification Infrastructure (FCI) provides insight into the data by automating classification processes. Rights Protected Folder Explorer (RPFe) is a Windows-based application that allows you to protect files and folders.

A Rights Protected Folder is similar to a file folder in that it contains files and folders. However, a Rights Protected Folder controls access to the files that it contains, no matter where the Rights Protected Folder is located. By using Rights Protected Folder Explorer, one can securely store or send files to authorized users and control which users will be able to access those files while they are in the Rights Protected Folder.

I have implemented Azure RMS for more than two customers. So far, so good.

Thursday, 3 April 2014

Global Azure boot camp Mumbai 29th March 2014


 

GWAB

Speaking at the Global Azure Boot Camp on 29/3 @ Mumbai has been quite a delight. Giving back to the community plays a vital role in fuelling growth.

Covered topics around Cloud Computing – The Next Paradigm, Big Data, and large-scale implementations on Azure. Find the presentations at the links below.

Cloud computing and Windows Azure have gained general acceptance among both Microsoft and non-Microsoft participants. The drive around open source is seemingly building up around Azure. The delighter of the sessions was the eagerness to know more about Big Data and HDInsight. Hadoop continues to be a bit of crystal-ball gazing for most participants, but a promising one. The eagerness of most to know more about the stack can and will be addressed in coming sessions.

Most business users and software folks ask questions about the biggest implementations on the Azure platform. This is more of an indicator to endorse their thought process.

I would like to make a special mention of Ajay Khankojhe from Synergetics; the event was conducted wonderfully.

Presentations

1. Large-scale Implementation on Windows Azure

2. Cloud Computing – The Next Paradigm

3. Big Data Basics

Wednesday, 6 November 2013

Azure SDK 2.2 Features & Migration


 

Brief Synopsis

SDK 2.2 is not a major upgrade, but it brings more features around remote debugging in the cloud, which was a big ask until now, and the Windows Azure Management Libraries from a developer's perspective. Windows Azure Service Bus partitioned queues and topics across multiple message brokers will help with better availability: previously each queue or topic was assigned to one message broker, which was a single point of failure; with the new feature, multiple message brokers can be assigned to a queue or topic. Please read the Q&A at the end to understand the nuances, along with the approach to move to Azure SDK 2.2 from a generic project standpoint.

At a high level, the new features are:

  • Visual Studio 2013 Support
  • Integrated Windows Azure Sign-In support within Visual Studio
  • Remote Debugging Cloud Services with Visual Studio – Very Relevant to  Developers
  • Firewall Management support within Visual Studio for SQL Databases
  • Visual Studio 2013 RTM VM Images for MSDN Subscribers
  • Windows Azure Management Libraries for .NET – Very Relevant to  Deployment Team
  • Updated Windows Azure PowerShell Cmdlets and ScriptCenter
  • Topology Blast – Relevant to Deployment Team
  • Windows Azure Service Bus – partition queues and topics across multiple message brokers – Relevant to  Developers. All Service Bus based projects have to move ASAP.

Only the highlighted areas are covered below.

Remote Debugging Cloud Resources within Visual Studio

Today's Windows Azure SDK 2.2 release adds support for remote debugging many types of Windows Azure resources. With live, remote debugging support from within Visual Studio, you now have more visibility than ever before into how your code is operating live in Windows Azure. Let's walk through how to enable remote debugging for a Cloud Service:

Remote Debugging of Cloud Services

Note: To debug, the web or worker role should be on Azure SDK 2.2.

To enable remote debugging for your cloud service, select Debug as the Build Configuration on the Common Settings tab of your Cloud Service’s publish dialog wizard:

[Screenshot: publish wizard – Common Settings tab with Debug build configuration]

Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox:

[Screenshot: publish wizard – Advanced Settings tab with Enable Remote Debugging for all roles checked]

Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code:

[Screenshot: breakpoint set in local source code]

Then use Visual Studio's Server Explorer to select the Cloud Service instance deployed in the cloud, and use the Attach Debugger context menu on the role or on a specific VM instance of it:

[Screenshot: Server Explorer – Attach Debugger on a role instance]

Once the debugger attaches to the Cloud Service, and a breakpoint is hit, you’ll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real-time, and see exactly how your app is running in the cloud.

[Screenshot: Visual Studio debugging the cloud instance in real time]

Today’s remote debugging support is super powerful, and makes it much easier to develop and test applications for the cloud.  Support for remote debugging Cloud Services is available as of today, and we’ll also enable support for remote debugging Web Sites shortly.

Windows Azure Management Libraries for .NET (Preview)- Automating PowerShell

Windows Azure Management Libraries are in Preview!

What do the Azure Management Libraries provide? Control over the creation, deployment and tear-down of resources, which previously was available only at the PowerShell level, is now available in code.

Having the ability to automate the creation, deployment, and tear down of resources is a key requirement for applications running in the cloud.  It also helps immensely when running dev/test scenarios and coded UI tests against pre-production environments.

These new libraries make it easy to automate tasks using any .NET language (e.g. C#, VB, F#, etc).  Previously this automation capability was only available through the Windows Azure PowerShell Cmdlets or to developers who were willing to write their own wrappers for the Windows Azure Service Management REST API.
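As a rough sketch of what this looks like in practice (assuming the preview Microsoft.WindowsAzure.Management.Compute NuGet package and a management certificate already uploaded to the subscription; the thumbprint and subscription id below are placeholders):

// Rough sketch: list the hosted services in a subscription with the preview
// management libraries. <cert-thumbprint> and <subscription-id> are placeholders.
using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Management.Compute;

class ListHostedServicesSketch
{
    static void Main()
    {
        // Load the management certificate that has been uploaded to the subscription.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2 cert = store.Certificates
            .Find(X509FindType.FindByThumbprint, "<cert-thumbprint>", false)[0];

        var credentials = new CertificateCloudCredentials("<subscription-id>", cert);

        using (var computeClient = new ComputeManagementClient(credentials))
        {
            // The same Service Management REST call PowerShell would make, now from .NET code.
            var response = computeClient.HostedServices.List();
            foreach (var service in response.HostedServices)
            {
                Console.WriteLine(service.ServiceName);
            }
        }
    }
}

The same pattern (a management client per area, created from credentials) applies to the storage, web site and virtual network clients listed below.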

Modern .NET Developer Experience

We’ve worked to design easy-to-understand .NET APIs that still map well to the underlying REST endpoints, making sure to use and expose the modern .NET functionality that developers expect today:

  • Portable Class Library (PCL) support targeting applications built for any .NET Platform (no platform restriction)
  • Shipped as a set of focused NuGet packages with minimal dependencies to simplify versioning
  • Support async/await task based asynchrony (with easy sync overloads)
  • Shared infrastructure for common error handling, tracing, configuration, HTTP pipeline manipulation, etc.
  • Factored for easy testability and mocking
  • Built on top of popular libraries like HttpClient and Json.NET

Below is a list of a few of the management client classes that are shipping with today’s initial preview release:

.NET Class Name – Supports Operations for these Assets (and potentially more)

  • ManagementClient – Locations, Credentials, Subscriptions, Certificates
  • ComputeManagementClient – Hosted Services, Deployments, Virtual Machines, Virtual Machine Images & Disks
  • StorageManagementClient – Storage Accounts
  • WebSiteManagementClient – Web Sites, Web Site Publish Profiles, Usage Metrics, Repositories
  • VirtualNetworkManagementClient – Networks, Gateways

Automating Creating a Virtual Machine using .NET

Let’s walkthrough an example of how we can use the new Windows Azure Management Libraries for .NET to fully automate creating a Virtual Machine. I’m deliberately showing a scenario with a lot of custom options configured – including VHD image gallery enumeration, attaching data drives, network endpoints + firewall rules setup - to show off the full power and richness of what the new library provides.

We’ll begin with some code that demonstrates how to enumerate through the built-in Windows images within the standard Windows Azure VM Gallery.  We’ll search for the first VM image that has the word “Windows” in it and use that as our base image to build the VM from.  We’ll then create a cloud service container in the West US region to host it within:

[Code screenshot: enumerating VM gallery images and creating a cloud service container]

We can then customize some options on it such as setting up a computer name, admin username/password, and hostname.  We’ll also open up a remote desktop (RDP) endpoint through its security firewall:

[Code screenshot: configuring machine name, admin credentials, hostname and RDP endpoint]

We’ll then specify the VHD host and data drives that we want to mount on the Virtual Machine, and specify the size of the VM we want to run it in:

[Code screenshot: specifying the VHD host and data drives and the VM size]

Once everything has been set up, the call to create the virtual machine is executed asynchronously:

[Code screenshot: asynchronous call to create the virtual machine]

In a few minutes we’ll then have a completely deployed VM running on Windows Azure with all of the settings (hard drives, VM size, machine name, username/password, network endpoints + firewall settings) fully configured and ready for us to use:

[Screenshot: the fully configured VM running on Windows Azure]

Topology Blast

This new functionality will allow Windows Azure to communicate topology changes to all instances of a service at one time instead of walking upgrade domains. This feature is exposed via the topologyChangeDiscovery setting in the Service Definition (.csdef) file and the Simultaneous* events and classes in the Service Runtime library.

Windows Azure Service Bus – partition queues and topics across multiple message brokers

Service Bus employs multiple message brokers to process and store messages. Each queue or topic is assigned to one message broker. This mapping has the following drawbacks:

· The message throughput of a queue or topic is limited to the messaging load a single message broker can handle.

· If a message broker becomes temporarily unavailable or overloaded, all entities that are assigned to that message broker are unavailable or experience low throughput.
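With SDK 2.2 a queue or topic can instead be spread across multiple brokers by enabling partitioning when the entity is created. A minimal sketch (assuming the WindowsAzure.ServiceBus NuGet package; the connection string is a placeholder):

// Minimal sketch: create and use a partitioned Service Bus queue.
// Assumes the WindowsAzure.ServiceBus NuGet package; the connection string is a placeholder.
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class PartitionedQueueSketch
{
    static void Main()
    {
        string connectionString =
            "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>";

        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        var description = new QueueDescription("orders")
        {
            // Spread this queue across multiple message brokers / messaging stores.
            EnablePartitioning = true
        };

        if (!namespaceManager.QueueExists(description.Path))
        {
            namespaceManager.CreateQueue(description);
        }

        // Sending and receiving work exactly as before; the service picks the partition.
        var client = QueueClient.CreateFromConnectionString(connectionString, description.Path);
        client.Send(new BrokeredMessage("hello"));
        Console.WriteLine("Sent to partitioned queue.");
    }
}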

Q&A

Q. Can I use Azure SDK 2.2 to debug a web role or worker role built using an earlier SDK?

A. No, you need to have your roles migrated to SDK 2.2. For older roles you can only get diagnostic information out of Visual Studio if 2.2 is installed.


Q. What are typical issues while migrating from 1.8 to 2.2?

Worker Roles and Web Role Recycling

I have 3 worker roles and a web role in my project and I upgraded it to the new 2.2 SDK (required in VS2013). Ever since the upgrade, all of the worker roles are failing and they instantly recycle as soon as they're started.

Post can be found here- http://stackoverflow.com/questions/19717215/upgrade-to-azure-2-2-sdk-is-causing-roles-to-fail

Not able to update the Role after upgrading

I recently worked on an issue where the following error was being thrown while deploying the upgraded role to Windows Azure.  You just upgraded the SDK to 2.1 or 2.2 and you start getting the following error while deploying the role.

Link to the post http://blogs.msdn.com/b/cie/archive/2013/10/31/not-able-to-upload-role-after-upgrading-the-sdk.aspx

Q. What are the steps to migrate to Azure SDK 2.2?

A. Open the Azure project in Visual Studio 2012:

  1. Upgrade the project via the Properties window of the cloud project; you will see an upgrade prompt.
  2. Follow through the upgrade process and fix the errors.
  3. Run the project locally to see if there are any errors, and fix them.
  4. Check in the code once all fixes are done.
  5. Test in the Dev environment to see if anything is breaking. There is a potential for breakage due to dependencies.

Note: The web and worker roles tend to go into an inconsistent state due to library dependency mismatches. This will have to be fixed.

Generic Migration to Azure SDK 2.2- High Level Approach

The suggested approach is to start with one component, one web role and one worker role (WCF REST), see the impact in terms of issues, and then decide the timelines for the others. The POC will be done in one sprint; the candidates are the following:

  • Component – Reusable Component
  • Web Role – Portal Web
  • Worker Role – Portal Worker

 

Links

· Installation of Azure SDK 2.2 - http://www.windowsazure.com/en-us/downloads/archive-net-downloads/

Wednesday, 2 October 2013

Cloud–Azure Downtime --A Reality and It Hurts….


Cloud is a technological innovation which has been accepted in the mainstream. The cloud platform is constantly evolving and still in its infancy; it will take some time to mature. The non-functional "-ilities" are something which need a lot more data and understanding, and most of them come with very one-sided T&Cs and the fine print "Conditions Apply".

Cloud downtime is a tricky topic and needs a lot more understanding. It's only when we run into a real situation that we start reading the fine print.

Microsoft Azure Compute ran into hot water on 27 Sep 2013, 6:34 AM UTC, in the North Central US datacenters. The documentation mentions partial performance degradation and states: "The repair steps have been successfully executed and validated. Full Compute functionality has been restored in the affected North Central US sub-region. We apologize for any inconvenience this has caused our customers."

What does the Cloud Services SLA on the Microsoft site say?

Cloud Services, Virtual Machines and Virtual Network

  • For Cloud Services, we guarantee that when you deploy two or more role instances in different fault and upgrade domains, your Internet facing roles will have external connectivity at least 99.95% of the time.
  • For all Internet facing Virtual Machines that have two or more instances deployed in the same Availability Set, we guarantee you will have external connectivity at least 99.95% of the time.
  • For Virtual Network, we guarantee a 99.9% Virtual Network Gateway availability

What is the SLA around Fault Domain?

We all tend to think a fault domain is a godsend, but in actuality a fault domain is a set of computers on some rack in the same data center, which is also liable to go down. If the entire data center goes down there is no failover. Moreover, even if the downtime is unplanned and over the documented limits, there is no region-wise failover.

<Correction> Fault domains don't exist across data centers, as pointed out by Wood – thanks. </Correction>

For more documentation on Fault Domain refer here.

Can one find out which fault domains their instances are in?

No. Apparently MSFT doesn't indicate the fault domain for the instances, at least not on the management portal. However, the Windows Azure SDK provides properties you can use to query fault domain and upgrade domain information. The RoleInstance class has a property called FaultDomain that you can read to find out in which fault domain your role instance is running. There is a catch though – querying the FaultDomain property will return either 1 (one) or 2 (two). This is because you are entitled to only 2 fault domains for your application. If your application is deployed across more fault domains you will not be able to determine this using the FaultDomain property.
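As a minimal sketch (assuming a reference to Microsoft.WindowsAzure.ServiceRuntime and that the code runs inside a web/worker role instance), reading the fault and update domain looks like this:

// Minimal sketch: read fault/update domain info from inside a role instance.
// Assumes a reference to Microsoft.WindowsAzure.ServiceRuntime.
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class DomainInfo
{
    public static void LogDomains()
    {
        RoleInstance current = RoleEnvironment.CurrentRoleInstance;

        // FaultDomain is capped at 2 as described above; UpdateDomain follows the
        // upgrade-domain count configured for the deployment.
        Console.WriteLine("Instance: {0}", current.Id);
        Console.WriteLine("Fault domain: {0}", current.FaultDomain);
        Console.WriteLine("Update domain: {0}", current.UpdateDomain);

        // The same properties are available for the other instances of the role.
        foreach (RoleInstance instance in current.Role.Instances)
        {
            Console.WriteLine("{0} -> FD {1}, UD {2}",
                instance.Id, instance.FaultDomain, instance.UpdateDomain);
        }
    }
}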

Can one really rely on the Fault Domain?

My take is that, given the limited documentation, the fault domain is more of an abstract term which features in the SLA and T&Cs. Consider the recent increase in compute and SQL Reporting downtime:

  • Sept 27 – 2013 – North Central US , 4:36 AM, 5:15 AM, 6:34 AM.
  • Sept 28-2013 -Compute : Partial Performance Degradation [North Central US]
    • 28 Sep 2013 5:15 PM UTC
    • 28 Sep 2013 5:00 PM UTC
    • 28 Sep 2013 4:20 PM UTC
    • 28 Sep 2013 3:20 PM UTC
  • Sept 29 -Compute, Storage and SQL Reporting : Performance Degradation [Southeast Asia]
    • 29 Sep 2013 4:44 PM UTC
    • 29 Sep 2013 4:07 PM UTC
    • 29 Sep 2013 2:07 PM UTC
    • 29 Sep 2013 1:07 PM UTC

The string of episodes around Compute did impact a lot of customers in multiple regions, and most of them, I presume, had fault domains (a minimum of 2 instances); still, this resulted in disruption of service. My take: if downtime can result in business loss, then don't depend on fault domains; look for alternatives.

What alternatives does the customer have in case they don’t want to rely on Fault Domains?

Azure does provide Traffic Manager, which is in CTP; this can be used, but please evaluate a fully working POC before using it as an option.

Traffic Manager allows you to load balance incoming traffic across multiple hosted Windows Azure services whether they’re running in the same datacenter or across different datacenters around the world. By effectively managing traffic, you can ensure high performance, availability and resiliency of your applications. Traffic Manager provides you a choice of three load balancing methods: performance, failover, or round robin.

Use Traffic Manager to:

Ensure high availability for your applications

Traffic Manager enables you to improve the availability of your critical applications by monitoring your hosted services in Windows Azure and providing automatic failover capabilities when a service goes down.

Run responsive applications

Windows Azure allows you to run services in datacenters located around the globe. By serving end-users with the hosted service that is closest to them in terms of network latency, Traffic Manager can improve the responsiveness of your applications and content delivery times.

Note: The customer pays for the extra set of instances running in a different data centre. The data synchronization methods have to be built over and above Azure Sync. Deployment from the customer's development environment has to go to both production and failover simultaneously. The customer will incur additional costs.

What are the dates around Traffic Manager GA?

No dates have been announced publicly by MSFT; the assumption is early March 2014.

What does the SLA for Compute mention?

Find the Compute SLA here. The important part of the document SLA Exclusions

a. SLA Exclusions

i. This SLA and any applicable Service Levels do not apply to any performance or availability issues:

1. Due to factors outside Microsoft's reasonable control (for example, a network or device failure at the Customer site or between the Customer and our data center). <Important> Does a natural disaster classify as beyond reasonable control? </Important>

2. That resulted from Customer’s or third party hardware or software. This includes VPN devices that have not been tested and found to be compatible by Microsoft. The list of compatible VPN devices is available at http://msdn.microsoft.com/en-us/library/windowsazure/jj156075.aspx.

3. When Customer uses versions of operating systems in either Virtual Machines or Cloud Services that have not been tested and found to be compatible by Microsoft. The Virtual Machines list of compatible Microsoft software and Windows versions is available at http://support.microsoft.com/kb/2721672. The Virtual Machines list of compatible Linux software and versions is available at http://support.microsoft.com/kb/2805216. The Cloud Services list of compatible operating systems is available at http://msdn.microsoft.com/en-us/library/ee924680.aspx.

4. That resulted from actions or inactions of Customer or third parties;

5. Caused by Customer’s use of the Service after Microsoft advised Customer to modify its use of the Service, if Customer did not modify its use as advised;

6. During Previews (e.g., technical previews, betas, as determined by Microsoft);

Or

7. Attributable to the acts or omissions of Customer or Customer’s employees, agents, contractors, or vendors, or anyone gaining access to Microsoft’s Service by means of Customer’s passwords or equipment.

This is a lot of legal jargon, but my 2 cents: if your business cannot afford downtime, please read the SLAs; considering the cloud is evolving, the SLAs can only get better. In the meantime, the customer will have to look at options beyond MSFT.

Additional Notes

How to find out the Fault Domain? -http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleinstance.faultdomain.aspx

Microsoft Azure Real-time Service Status http://www.windowsazure.com/en-us/support/service-dashboard/

Wednesday, 4 September 2013

Intelligence As A Service–Codename Zephyr


Intelligence As A Service
How often have you searched for a product on the internet, not found a decent comparative analysis, and still been pushed closer to a purchase? How often have you responded to a job advertisement without knowing the complete analysis of the company, the role, the probable salary, and what the generic responses from HR folks indicate? More importantly, whom should you connect with in the industry to better understand the opportunity you are applying for? Big Data is coming into the mainstream for everything. The next big thing I am working on is an intelligent analyser for anything, tagged as open source, codename Zephyr. More to follow: https://github.com/ajayso/zephyr

Sunday, 14 July 2013

Big Data–Back to the Basics


 

The Big Data landscape is forever changing. While studying what's in the market and how I can really get a handle on understanding Big Data, I found that every 2 weeks there is a new name in the landscape. On the flip side, there are names which just vanish in a short time. It would be good to get a basic understanding of Big Data. In this post I don't necessarily talk about a specific technology; I'm just trying to get the understanding right. Big Data, if you look at it, is classified around 3 areas:

  • Batch
  • Interactive
  • Real-time Tools.

One really has a challenge in terms of how to understand Big Data and what the base classification of the Big Data space is. This is what the landscape looked like as of early January 2013; it has already changed.

[Image: Big Data landscape, early 2013]

Broadly, this gives an idea of all the different things in the Big Data landscape. The best way to understand the space is through the following high-level concepts.

Batch Processing

The data provided needs to be processed, in large amounts, fairly quickly. This is most typically seen in the world of Hadoop, or HDInsight on Windows Azure. Essentially, what does it entail?

  • Data is spread over n number of disks across the nodes, with a distributed cluster to process that data.
  • The volume of the data is generally TBs to PBs.
  • The primary programming model used is MapReduce: essentially, operations are mapped out to each machine, and the results are then aggregated using the reduce function.

The MapReduce functions typically have to be written by a developer. The idea originated with Google's MapReduce; currently there are 2 major open source projects – Hadoop and Spark. Spark provides primitives for in-memory cluster computing: your job can load data into memory and query it repeatedly, much more quickly than with disk-based systems like Hadoop MapReduce.
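To make the map/reduce split concrete, here is a tiny word-count sketch in the MapReduce style, written as a local C# simulation (not Hadoop code): the map phase emits (word, 1) pairs, the shuffle groups them by key, and the reduce phase sums the counts.

// Word count in the MapReduce style - a local LINQ simulation, not Hadoop code.
using System;
using System.Collections.Generic;
using System.Linq;

class WordCountSketch
{
    // Map: one input line -> a list of (word, 1) pairs.
    static IEnumerable<KeyValuePair<string, int>> Map(string line)
    {
        foreach (var word in line.Split(new[] { ' ', ',', '.' }, StringSplitOptions.RemoveEmptyEntries))
            yield return new KeyValuePair<string, int>(word.ToLowerInvariant(), 1);
    }

    // Reduce: one word and all its counts -> (word, total).
    static KeyValuePair<string, int> Reduce(string word, IEnumerable<int> counts)
    {
        return new KeyValuePair<string, int>(word, counts.Sum());
    }

    static void Main()
    {
        string[] lines = { "the quick brown fox", "the lazy dog", "the quick dog" };

        var result = lines
            .SelectMany(Map)                       // map phase (runs on each node in a real cluster)
            .GroupBy(kv => kv.Key, kv => kv.Value) // shuffle: group intermediate pairs by key
            .Select(g => Reduce(g.Key, g));        // reduce phase (aggregation)

        foreach (var kv in result.OrderByDescending(kv => kv.Value))
            Console.WriteLine("{0}: {1}", kv.Key, kv.Value);
    }
}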

Interactive Analysis

A large set of data needs to be analysed interactively. There are two techniques generally employed here: the first is column-based databases with sequentially indexed data, which are capable of doing table scans pretty quickly; the second is to keep as much data as possible in an in-memory cache. These are pure-play interactive platforms which are able to analyse large sets of data at very low latency. One good example is Palantir's Project Horizon, largely built for interactive analysis of big data for the US government. Below is a cool video which explains it in more depth. What seems to be important in these kinds of implementations:

  • Data is never duplicated.
  • Compact in-memory representation – the in-memory data representation needs to have a really small memory footprint (up to ~16 GB). Compression must be lightweight – dictionary and prefix-based compression and localized block-based schemes are effective.
  • The analysis functionality here is not business specific; it is more like "analyse any kind of data".
  • Partitioning of processing: shared-nothing/sharded architecture.
  • Partition IDs for objects would be a good idea, but sending half a billion partition IDs from the client to the server is not. Another way is to hash object IDs into subsets and use the hash for the query (see the sketch below).
  • Other options for interactive analysis are Drill, Shark, Impala and HBase. This space originally started with Google's Dremel.
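A minimal sketch of that hash-into-subsets idea (the bucket count and the hash below are illustrative choices, not any specific product's scheme):

// Illustrative hash partitioning: instead of shipping half a billion object IDs,
// client and server agree on a hash, and the query carries bucket numbers only.
using System;
using System.Collections.Generic;
using System.Linq;

class HashPartitionSketch
{
    const int BucketCount = 1024; // illustrative choice

    static int BucketOf(long objectId)
    {
        // Same deterministic hash on the client and the server.
        return (int)((objectId ^ (objectId >> 17)) & 0x7FFFFFFF) % BucketCount;
    }

    static void Main()
    {
        long[] selectedObjectIds = { 42, 1001, 73001337, 9999999999 };

        // The client sends only the distinct bucket numbers...
        int[] buckets = selectedObjectIds.Select(BucketOf).Distinct().ToArray();
        Console.WriteLine("Query by buckets: " + string.Join(", ", buckets));

        // ...and the server scans only those buckets, matching exact IDs locally.
        var idsToMatch = new HashSet<long>(selectedObjectIds);
        Console.WriteLine("{0} ids will be matched server-side within those buckets.", idsToMatch.Count);
    }
}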

<Video on Palantir’s Interactive Analysis Platform – Project Horizon>

Stream Processing

Hadoop’s batch-oriented processing is sufficient for many use cases, especially where the frequency of data reporting doesn’t need to be up-to-the-minute. However, batch processing isn’t always adequate, particularly when serving online needs such as mobile and web clients, or markets with real-time changing conditions such as finance and advertising.”

The real-time use case is an obvious one. If you need to respond or be warned in real-time or near real-time, for example, security breaches or a service impacting event on a VoIP or video call, the high initial latency of batch oriented data stores such as Hadoop is not sufficient.

Moreover, the data is not valuable without analysis. In a typical real-time scenario, data is fed from multiple sources, and analysing this data on the fly is seen as a business requirement in multiple industry segments.

Streaming Big Data analytics needs to address two areas. First, the obvious use case, monitoring across all input data streams for business exceptions in real-time. This is a given. But perhaps more importantly, much of the data held in Big Data repositories is of little or no business value, and will never end up in a management report. Sensor networks, IP telecommunications networks, even data center log file processing – all examples where a vast amount of ‘business as usual’ data is generated. It’s therefore important to understand what’s being stored, and only persist what’s important (which admittedly, in some cases, may be everything). For many applications, streaming data can be filtered and aggregated prior to storing, significantly reducing the Big Data burden, and significantly enhancing the business value of the stored data.
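As a toy illustration of filtering and aggregating before storing (the Reading type, the filter and the one-minute window are illustrative choices only):

// Toy illustration: filter a stream and keep only per-minute aggregates for storage.
using System;
using System.Collections.Generic;
using System.Linq;

class Reading { public DateTime At; public string Sensor; public double Value; }

class StreamFilterAggregateSketch
{
    static void Main()
    {
        IEnumerable<Reading> stream = Incoming(); // stand-in for the live feed

        var toPersist = stream
            .Where(r => r.Value >= 0) // filter: drop obviously bad readings
            .GroupBy(r => new
            {
                r.Sensor,
                Minute = new DateTime(r.At.Year, r.At.Month, r.At.Day, r.At.Hour, r.At.Minute, 0)
            })
            .Select(g => new
            {
                g.Key.Sensor,
                g.Key.Minute,
                Avg = g.Average(r => r.Value),
                Max = g.Max(r => r.Value),
                Count = g.Count()
            });

        // Only the aggregates would hit storage; the raw firehose is discarded.
        foreach (var row in toPersist)
            Console.WriteLine("{0} {1:HH:mm} avg={2:F1} max={3:F1} n={4}",
                row.Sensor, row.Minute, row.Avg, row.Max, row.Count);
    }

    static IEnumerable<Reading> Incoming()
    {
        var t = new DateTime(2013, 7, 14, 10, 0, 0);
        yield return new Reading { At = t,                Sensor = "A", Value = 21.5 };
        yield return new Reading { At = t.AddSeconds(20), Sensor = "A", Value = 22.1 };
        yield return new Reading { At = t.AddSeconds(40), Sensor = "A", Value = -1 };  // bad reading, filtered out
        yield return new Reading { At = t.AddMinutes(1),  Sensor = "A", Value = 23.0 };
    }
}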

Stream processing was popularized by a project called Storm at Twitter; the need there was, first, to be able to store data like images and feeds, and second, to aggregate it very quickly.

Storm and Apache Kafka are some of the stream processing platforms.

Quick Comparison Sheet

[Image: quick comparison of batch, interactive and stream processing]

The No SQL Paradigm

The relational database, based on Codd's work, came into existence in the 70s, championed by IBM, with multiple query language variants. After that:

In the 80s we saw a lot of applications being built, and the need for speed was important. Ingres came up with the idea of saving some part of the big database in another small area, which they termed an index, and which improved performance.

Then along came the web, which changed the dynamics of the database, and that is where the concept of NoSQL came from. NoSQL means Not Only SQL.

The focus in any relational database design is on "how can we store this" without duplicating it; very little focus is given to how it will be used. In current times the focus is more on "how do I use this data", obviously with performance and scalability at the center of the conversation. NoSQL is ideally focused on "how do I use this data" – for example, is the data going to be used for a job queue, a shopping cart, a CMS, or multiple other usage scenarios? Below is the JSON of a shopping cart item, which consists of the id, user id, line items, etc., all in one place – and that's how it actually gets used.

{
  id: 3,
  user_id: 25,
  line_items: [
    { sku: '123', price: 1000, name: 'Nunemaker Autograph' },
    { sku: '124', price: 1000, name: 'Banker Autograph' }
  ],
  shipping_address: {
    street: '123 Some St.',
    city: 'South Bend',
    state: 'IN',
    zip: '11216'
  },
  subtotal: 2000,
  tax: 140,
  total: 2140
}

It's best to keep all the related data in one place because that's the way it is going to be used. In the relational world the emphasis is on how we are going to store and separate the data, versus the NoSQL world where it is kept together and usage becomes far simpler.
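Since the whole aggregate travels together, reading it back is a single deserialization step. A minimal sketch using Json.NET (the classes below simply mirror the document above; this is illustrative, not tied to any particular NoSQL store):

// Minimal sketch: the cart aggregate is read as one document with Json.NET (Newtonsoft.Json).
using System;
using Newtonsoft.Json;

public class LineItem { public string Sku { get; set; } public int Price { get; set; } public string Name { get; set; } }
public class Address  { public string Street { get; set; } public string City { get; set; } public string State { get; set; } public string Zip { get; set; } }

public class Cart
{
    public int Id { get; set; }
    [JsonProperty("user_id")] public int UserId { get; set; }
    [JsonProperty("line_items")] public LineItem[] LineItems { get; set; }
    [JsonProperty("shipping_address")] public Address ShippingAddress { get; set; }
    public int Subtotal { get; set; }
    public int Tax { get; set; }
    public int Total { get; set; }
}

class CartSketch
{
    static void Main()
    {
        string doc = @"{ ""id"": 3, ""user_id"": 25,
                         ""line_items"": [ { ""sku"": ""123"", ""price"": 1000, ""name"": ""Nunemaker Autograph"" } ],
                         ""shipping_address"": { ""street"": ""123 Some St."", ""city"": ""South Bend"", ""state"": ""IN"", ""zip"": ""11216"" },
                         ""subtotal"": 2000, ""tax"": 140, ""total"": 2140 }";

        // One read gives the entire aggregate - no joins across normalized tables.
        Cart cart = JsonConvert.DeserializeObject<Cart>(doc);
        Console.WriteLine("Cart {0} for user {1}: {2} item(s), total {3}",
            cart.Id, cart.UserId, cart.LineItems.Length, cart.Total);
    }
}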

NoSQL is pro-data, and it is about how you use the data. The debate of SQL vs. NoSQL is never really about scalability, but when you get to the point of scaling, it is far easier to scale in the NoSQL world.

No SQL is analogous to OLTP

In the coming sections I will concentrate on some of the important interactive analysis and streaming platforms.

HBase – Interactive Platform

HBase is used by Facebook: all the messages one sends on Facebook actually go through HBase. It is all about high-volume, super-fast INSERTs. It is also good at volatile READs. HBase was designed to be a highly transactional system, owing to the fact that the data is pretty much in memory.

HBase is an in-memory, column-store database. This has to be understood properly: a column store is not the same as an RDBMS column. It is a very efficient INSERT/write engine. Also, the definition of a database for HBase is different.

[Diagram: HBase architecture – HRegion servers, ZooKeeper and HDFS]

HBase relies on Hadoop for its persistent storage, which pretty much explains the rest of the figure.

ZooKeeper is a distributed coordination service; it keeps track of all the HRegion servers and makes sure whatever is written into memory is also written to HDFS. By definition, the way HDFS works, if you lose a node you already have 3 replicas.

Cassandra - Interactive Platform

Cassandra is very similar to HBase in terms of functionality. It has a SQL-like query language called CQL. Both HBase and Cassandra are very good at writes, super fast, and both are in-memory.

Facebook initially started with Cassandra and then moved on to HBase. The real reason for moving to HBase is not very clear.

Drill

Drill is an Apache incubation project inspired by Dremel, designed to scale to 10k servers and query PBs in minutes. A traditional petabyte MapReduce job will take hours today; Drill apparently takes minutes, and in some cases seconds. It is an open source reimplementation of Dremel.

A little background on Dremel

Some Google terminology is associated with Drill – Big Data and BigQuery. Big Data in the industry, by definition, is about 500 million rows and above, and there is a limitation on the size of a column at 64 KB.

Big Data at Google – what does it mean in terms of numbers?

  • 60 hours of YouTube video uploaded per minute
  • 100 million gigabytes search Index / Analysis of 2010
  • 425 million Gmail users.

Looking at the size of the data, a relational database is an obvious no. That leaves the option of doing a full table scan, which can be expensive in the relational world; that's where Dremel was born.

BigQuery is the externalization of this technology.

What does a BigQuery query really look like, e.g. finding the top installed market apps?

SELECT top(appId, 20) AS app, count(*) AS count FROM installing.2012
ORDER BY count DESC

Results in less than 20 seconds.

Where can we use Big Query?

  • Game and social media analytics
  • Infrastructure monitoring
  • Advertising campaign optimization
  • Sensor data analysis.

Apache Drill is an incubation project around interactive analysis of large datasets. MapReduce is a batch-mode tool and there is latency associated with it. There are cases where one would like data faster, in near real-time; some of the scenarios are:

  • Ad-hoc analysis with interactive tools
  • Real-time dashboards
  • Event/trend detection and analysis
    • Network intrusion
    • Fraud
    • Failures

The key point about Dremel is that it uses a nested data model.

Apache Drill is a system designed to support nested data:

  • Flat records are the simplest case of nested data, i.e. root only
  • Supports schema-based (Protocol Buffers, Apache Avro) and schema-less (JSON, BSON) formats

What nested query languages are supported by Drill?

  • DRQL
    • SQL like query language for nested data
    • Compatible with Google BigQuery/Dremel
    • Designed to support efficient column based processing
  • Mongo Query Language
  • Other languages/programming models can plug in.

[Image: nested data model for a Document, with a SQL-like DRQL query]

 

The data model is a nested data model for a Document, split across basic document data and URL entries. The query is very SQL-like.

How does Data Flow work with Drill?

Data is loaded into the Hadoop cluster by one of many mechanisms, i.e. Hive, the HDFS command line, MapReduce or the NFS interface. The data in Hadoop is stored in row fashion. The Drill loader is responsible for converting the row-based data into columnar form.

Alternatively, a row-based query is allowed the first time, which in turn helps in creating a columnar copy of the same data.

[Diagram: Drill data flow from Hadoop into columnar storage]

 

What does the Query Execution of Drill look like at high level?

[Diagram: Drill query execution – driver, parser, compiler, execution engine, storage]

At a very high level the Query Execution involves the following step.

  • The driver submits the query (text) to the parser. The parser parses it, builds the abstract syntax tree and hands it over to the compiler.
  • The compiler does the optimization and builds the execution plan.
  • The execution engine is responsible for scheduling this against storage, which can be on any server.

Which typical SQL query components are supported? What are they?

Query Components

  • SELECT
  • FROM
  • WHERE
  • GROUP BY
  • HAVING
  • (JOIN)

Key logical operators

  • Scan
  • Filter
  • Aggregate
  • (Join)

What is so unique about the SCAN logical operator?

One of the architecture goals for Drill has been to support multiple formats; that is achieved by having a SCAN operator for each format. So, for example, if the query has a WHERE clause over JSON data, the query would involve

SELECT Json(data URI)

Field and predicates are pushed down into the scan operator.

What’s actually involved in the Execution Engine?

Drill Engine Execution has 2 layers

[Diagram: Drill execution engine – operator layer and execution layer]

  • The operator layer, which is the serialization-aware layer – this is where individual records are processed; for example, doing a count on a table involves a local table scan followed by local aggregation, with a sum done at the global aggregation.
  • The execution layer is not serialization-aware; all it does is transfer blobs across nodes and the cluster, and this layer is responsible for communication between nodes, dependencies (what has to finish before what) and fault tolerance.

A Complete Video on Drill can be found here

Introduction to Apache Drill

 

Impala – Interactive Analysis

Impala solves much the same problem as Drill, i.e. moving beyond MapReduce batch processing to low latency. It has very similar columnar storage and the complete works, like Drill. So, for the sake of simplicity, we will just get to the points of differentiation here.

However, it would be unfair of me to compare the two in detail in terms of maturity and functionality at the present time. As of October 29th, 2012, the Drill source code repository at [1] has code for a query parser and a plan parser which includes a reference plan evaluator which can perform scans against JSON-formatted data in flat files. Impala's tree at [2] includes a distributed query execution engine with support for cancellation, failure-detection, data modification via INSERT, integration with HDFS and HBase, JIT-compiled execution fragments via LLVM and a bunch of other stuff.

Impala is completely dependent on Hadoop. It utilizes HiveQL. Impala is progressing towards becoming an MPP (Massively Parallel Processing) architecture.

The query processing is similar to Drill.

[Diagram: Impala query processing]

The big push from MSFT towards Impala is pretty evident, as they want a good interactive tool in this arena; without one they would be doomed.

 

Storm- Stream Analysis

Storm is a free and open source distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use!

Storm has many use cases: real-time analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate.

Storm integrates with the queuing and database technologies you already use. A Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.

Closing Notes……………

There is a lot happening in this space. The need for interactive analysis and stream processing is a must for any big data implementation.

Google has pretty much been the innovator here – MapReduce way back in 2003, then Dremel – and they are pretty much leading this space, with the open source world really quick to capitalize on these ideas and bring them to market. On the contrary, the rest of the world (Microsoft, Oracle, IBM…) has a long way to sprint to keep up. The easiest way for them is to accept an open source implementation and roll it out quickly, for example HDInsight.

It's very clear that MSFT is pushing Impala in the interactive space alongside HDInsight.


 

Sunday, 23 June 2013

IaaS Inherent Part of the PaaS Architecture


 

We wish for a perfect world; honestly, this exists only in utopian terms, and so an architect realizes while architecting a solution for cloud PaaS that there is no perfect architecture. There is an architecture which fits the bill for the moment, given the shortcomings.

Normally, while migrating an on-premise application to cloud PaaS, we run into multiple scenarios where we see PaaS is not a perfect fit. One soon starts to ponder the alternatives, and the easiest is to have that application or component built and deployed on IaaS, as this would be very similar to what we already have on premise.

So IaaS is a solution for the short term. With Windows Azure correcting their mistake, bringing persistent IaaS in 2012 and providing better basic features like clustering and support early this year, IaaS does become an attractive choice.

So what does Persistent VM in Azure really have to offer?

  • Storage: Persistent storage – easily add new storage.
  • Deployment: Build the VHD in the cloud, or build on premise and deploy.
  • Networking: Internal endpoints are open by default. Access control with a firewall or the guest OS. Input endpoints are controlled through the management portal, services and API.
  • Primary use: Applications that require persistent storage can easily run on Windows Azure.

What OS images Azure IaaS comes with?

  • Windows Server 2008 R2
  • Windows Server 2008 R2 with Sql Server 2012 Evaluation
  • Windows Server 2008 R2 with BizTalk Server 2012.
  • Windows Server 2012
  • Open SUSE 12.1
  • CentOS 6.2
  • Ubuntu 12.04
  • Suse Linux Enterprise Server SP2.

Which key Server Applications does Azure IaaS support?

  • SQL Server 2008, 2008 R2, 2012 – Note: SQL Azure comes with a very stripped-down version of SQL Server, so in case one is planning on using anything beyond transactional workloads (SSAS, SSRS, SSIS…), one has to look at SQL Server on IaaS.
  • SharePoint 2010, 2013 (assuming) – Note: Given SharePoint Online is a stripped-down version of SharePoint, one will have to look at SharePoint Server on IaaS for more functionality.
  • BizTalk Server 2010 – BizTalk PaaS is in its infancy, with very limited features for EDI/EAI integrations. A complete reference can be found here.
  • Windows Server 2008 R2, 2012

* The biggest workload on the cloud for any enterprise application is SQL Server on IaaS. This list is going to grow over time. There is also customer support for the above list.

What is the difference between Virtual Machines and Cloud Services?

The virtual machines that one creates are implicitly inside cloud services. Cloud services may appear to be segregated from VMs, but they are not.

To explain things better, let's take an example.

Let's assume we have a cloud service with a web role (3 instances) and a worker role (3 instances). The cloud service acts more like a container consisting of the web and worker roles:

  • It is a management container: when one deletes/updates the cloud service, it deletes/updates all the entities in it.
  • It is also a security boundary, i.e. roles in the same cloud service can interact with one another, which cannot be done across cloud services unless they explicitly allow it.
  • It is a network boundary; each of the roles is visible to the others on the network.


 

When one creates a virtual machine (which is a role with exactly one instance), it is placed in an implicit cloud service.


When one creates a VM, it appears in the VM section of the management portal and not under cloud services. The implicit cloud service is the DNS name assigned to the virtual machine. So, for example, if one has created the first virtual machine with the name mymachinedemo, and then creates a second virtual machine choosing to connect it with an existing virtual machine, it will show a list of existing virtual machines. So essentially the cloud service acts as a container for the virtual machines.


When one creates multiple virtual machines via the "connect to an existing virtual machine" option, it places the new virtual machine under the same cloud service, and the DNS name will then start showing in the list of cloud services.


The hiding of the cloud service only happens in the portal.

Images and Disks, What are these?

Images are base images provided by the create-from-gallery functionality, where one has a bunch of pre-existing operating system images. After creating a VM from an image you get an OS disk, which is your specific operating system disk, and associated with it are data disks. The disks are writable disks for virtual machines. The VM sizes currently supported by MSFT are shown below (and are subject to change); additionally you have 28 and 56 GB RAM sizes as well.

[Table: VM sizes supported]

A data disk can go up to 1 TB in size. One can attach multiple data disks to one VM.

Images and disks are stored as Windows Azure storage blobs; data is triplicated, i.e. 3 copies. Disk caching is also supported: read and read/write.

OS disk size is about 127 GB.

What is availability story around virtual machines?

The service level agreement is 99.95% for multiple role instances (web and worker), which is about 4.38 hours of downtime per year. Multiple role instances essentially means 2 VMs in the same role, so the idea is to have a minimum of 2 VMs in a role. What's included in the 99.95% is:

  • Compute hardware failure (disk, CPU, memory)
  • Data center failure – network and power failure
  • Hardware upgrades and software maintenance – host OS updates

What is not included? – VM Container crashes, Guest OS Updates.

What does this SLA mean for a VM?

It means that if one deploys 2 instances of the same virtual machine in the same cloud service (DNS name) and the same availability set, then one gets a 99.95% SLA.


What is the concept of availability set?

By default, for every role which has 2 instances, Windows Azure spreads the instances across 2 fault domains and 2 update domains, i.e. if you have fault and update domains defined. A fault domain gets defined on the basis of a single point of failure, in this case the top-of-rack router.

A fault domain represents a group of resources which are anticipated to fail together, i.e. the same rack or same server. The fabric spreads instances across at least 2 fault domains.

An update domain represents a group of resources that can be updated together.

The availability set comes with the same fault and update domain concept. So, for example, if you have 2 instances of the same VM defined in an availability set, you are going to get instances of the same VM across fault and update domains as a bare minimum. So in all there are 6 instances of VMs running.


The story would be incomplete without the networking capabilities of Azure IaaS. What are the options?

So what has MSFT done for Azure IaaS networking? Some of the features include:

  • Full control over machine names
  • Windows Azure-provided DNS – resolves VMs by name within the same cloud service. Machine names are modelled and explicitly published in the DNS service.
  • Use an on-premise DNS server.

Note: In PaaS, web and worker role communication happens via messaging; in the VM world it is via DNS lookup.

Protocols Supported

  • UDP traffic supported – load balancing of incoming traffic, and outbound traffic allowed.
  • Support for all IP-based protocols (VM-to-VM communication) – instance-to-instance communication over TCP, UDP and ICMP.
  • Port forwarding – direct communication to multiple VMs in the same cloud service.
  • Custom load balancer health probes – health checks with probe timeouts; HTTP-based probing, allowing granular control of health checks.

Load Balanced Sets for IaaS

Similar to availability sets are load-balanced sets, which allow a set of VMs within the same cloud service to be load balanced.


Load Balance with Custom Probes

In IaaS there are no agents installed on the VM, so there was a requirement to define a point which could be probed, for example /health.aspx as the probe path; if the probe gets an HTTP 200, the load balancer assumes everything is healthy.
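A minimal sketch of such a probe endpoint (a hypothetical ASP.NET handler mapped to /health.aspx or a similar path; the dependency check is purely illustrative):

// Minimal sketch of a custom probe endpoint. The handler name, the path it is mapped to
// and the dependency check are illustrative; only the status code matters to the probe.
using System;
using System.Web;

public class HealthHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        bool healthy = CheckDependencies();

        // 200 keeps the instance in the load balancer rotation; anything else takes it out.
        context.Response.StatusCode = healthy ? 200 : 503;
        context.Response.ContentType = "text/plain";
        context.Response.Write(healthy ? "OK" : "UNHEALTHY");
    }

    private bool CheckDependencies()
    {
        // Illustrative: verify whatever this instance needs (database, queue, disk...).
        return true;
    }
}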


Cross Premise Connectivity

For connecting with on-premise Active Directory or the on-premise network, Windows Azure Connect has been around, but it is not very well accepted. Windows Azure Connect uses an IPsec tunnelling concept with an agent hosted on both machines which need to communicate. If one is doing a domain join with an Azure VM, the problem has been having to install the agent on the DC, which has not gone down well with many.

Alternatively, site-to-site connectivity – Windows Azure Virtual Network – came into play. It provides a virtual network and a gateway; the gateway uses a standard VPN device.


What would one need to take care when migrating application to Azure IaaS?

  • SQL Server installed on IaaS needs to be clustered, so having 2 instances of SQL Server in the same cloud service will help. One may need to size up the data disks required.
  • Built-in load balancing support: if you deploy a web application on IaaS, you can rely on the platform load balancer.
  • Integrated management and monitoring is provided by Azure itself for VMs.
  • Fault and upgrade domains for all VMs are a must.
  • Windows Azure Virtual Network can be used to connect with an on-premise application or for a domain join.
  • Hourly billing support: In addition to making it easier and faster to get started, these SQL Server and BizTalk Server images also enable an hourly billing model, which means you don't have to pay for an upfront license of these server products – instead you can deploy the images and pay an additional hourly rate above the standard OS rate for the hours you run the software. This provides a very flexible way to get started with no upfront costs (you pay only for what you use). You can learn more about the hourly rates.
  • Workloads to be shifted to the cloud need to be looked at from a compute, storage and networking standpoint.


The philosophy for the cloud world is lift and shift workloads.

As of 2013 we still need to use a lot of application as is include to cloud and IaaS is an inherent part of overall PaaS. May be in years to come it will be a complete PaaS architecture.