
Technology Highlights Google Cloud Next 2017

Google Cloud Next 2017 was the largest Google developer and IT gathering in Amsterdam, held to explore the latest developments in cloud technology. It was a chance to engage with the foremost minds leading the cloud revolution and to learn how the modern enterprise is benefiting from the latest in cloud technology in unprecedented ways. As usual, for us it was one more way to keep up with technology.

We saw some very interesting new innovations (Spanner and App Maker, to name two) and how they relate to application development in the cloud.

Below are the other highlights of the technologies discussed during the event:

1) Microservices & Kubernetes:

Microservices – an architectural style that structures an application as a collection of loosely coupled services, each implementing a business capability. It enables the continuous delivery/deployment of large, complex applications and lets an organization evolve its technology stack and develop and deploy faster. It is an evolution of software development and deployment that embraces DevOps and containers and breaks applications down into smaller individual components.

The emerging combination of microservice architectures, Docker containers, programmable infrastructure, cloud, and modern Continuous Delivery (CD) techniques has enabled a true paradigm shift for delivering business value through software development.

The combination of microservices and containers promotes a fundamentally different vision of how applications are developed, deployed, and evolved.

Kubernetes – an open-source system for automating the deployment, scaling, and management of containerized applications, originally designed by Google and donated to the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts” and supports a range of container tools, including Docker.
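To make the “clusters of hosts” idea concrete, here is a minimal sketch that talks to a Kubernetes cluster using the official Python client (an assumption on our side – the sessions themselves worked with kubectl and YAML manifests) and lists the pods it is running:

```python
# Minimal sketch: list the pods in a Kubernetes cluster with the official
# Python client (pip install kubernetes). Assumes cluster credentials are
# already in ~/.kube/config, e.g. via `gcloud container clusters get-credentials`.
from kubernetes import client, config

config.load_kube_config()            # read the local kubeconfig
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```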


2) Choosing the right compute option in a cloud project: a decision tree

The goal is to understand the trade-offs, decide which models are the best fit for your systems, and see how those models map to the Cloud services: Compute Engine, Container Engine, App Engine, and Cloud Functions.

Compute Engine is Infrastructure-as-a-Service. Developers have to create and configure their own virtual machine instances. It gives them more flexibility and generally costs much less than App Engine. The drawback is that developers have to manage their app and virtual machines themselves.

Container Engine is another level above Compute Engine: it is a cluster of several Compute Engine instances that can be centrally managed.

App Engine is a Platform-as-a-Service. It means that the developer can simply deploy their code, and the platform does everything else for them.

Cloud Functions is a serverless computing service, the next level up from App Engine in terms of abstraction. It allows developers to deploy bite-size pieces of code that execute in response to different events, which may include HTTP requests, changes in Cloud Storage, etc.
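To give an idea of how small such a piece of code can be, here is a minimal sketch of an HTTP-triggered Cloud Function. The function name is our own illustration, and the Python runtime shown here arrived after this event (Node.js was the runtime on offer at the time):

```python
# main.py – minimal sketch of an HTTP-triggered Cloud Function.
# `request` is a Flask Request object supplied by the platform.
def hello_http(request):
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```

Deployed with something like `gcloud functions deploy hello_http --runtime python39 --trigger-http`, the code runs only when a request comes in, and you only pay for that execution.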


3) Big data – Big data refers to data that would typically be too expensive to store, manage, and analyse using traditional (relational and/or monolithic) database systems. Usually, such systems are cost-inefficient because of their inflexibility for storing unstructured data (such as images, text, and video), accommodating “high-velocity” (real-time) data, or scaling to support very large (petabyte-scale) data volumes. There are new approaches to managing and processing big data, including Apache Hadoop and NoSQL database systems. However, those options often prove to be complex to deploy, manage, and use in an on-premise situation.

Cloud computing offers access to data storage, processing, and analytics on a more scalable, flexible, cost-effective, and even secure basis than can be achieved with an on-premise deployment.

What’s Next in Google Cloud Development?

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud currently dominate the public cloud market when it comes to IaaS (Infrastructure as a Service) and PaaS (Platform as a Service). Although Amazon is still the undisputed leader in the cloud market, Microsoft’s and Google’s cloud offerings are growing rapidly.

With great interest, and as usual to keep up with technology, representatives from the Uniface development team had already visited AWS and Microsoft Azure events earlier this year. The opportunity to visit a cloud event from the third-biggest player in the cloud market came on 21 June at De Kromhouthal in Amsterdam.


Along with us, thousands of others attended this event, which drew overwhelming interest at an inspiring location in Amsterdam with a beautiful view over the IJ. Google introduced itself as a cloud partner for enterprises and as a good alternative to the Microsoft Office suite. For Google Cloud, data is managed and stored in giant data centers around the world, with one of the biggest in the Netherlands.


The event started with keynotes that gave an overview of the technologies to be touched on during the rest of the day, including resilient microservices; machine learning APIs; Google Assistant with API.AI; Cloud Functions; Spark and Hadoop; Cloud Bigtable; Cloud Spanner; and cloud security. Across three different tracks, all of these and other (new) technologies were discussed, along with customer cases from companies including ING and World Press Photo.


After the keynotes, most of us went to sessions about microservices with Containers, Kubernetes and Cloud Containers. It was an interesting session about creating clusters of load-balanced microservices that are resilient and self-healing. With a group of Compute Engine instances running Kubernetes, an open-source container management system, you can fully automate the deployment, operations, and scaling of containerized applications.

During the rest of the day, we attended sessions about building Firebase apps; building chatbots with machine learning; securing your cloud infrastructure; processing big data with BigQuery; and Google Cloud Spanner, the mission-critical, relational, and scalable application database.

Although all of it was very interesting for us as technology addicts, here are a few things worth sharing with you:

  • In general, 99% of exploited vulnerabilities had patches available for more than a year. So, keeping your software and infrastructure up to date is one of the best security measures you can take. (Read more here.)
  • With Google BigQuery, you are able to query a petabyte(!) sized database in just a few minutes with plain SQL queries (a small sketch follows this list).
  • As a machine-learning expert, Google has opened APIs for computer vision, predictive modeling, natural language understanding, and speech recognition, which you can embed in your smart enterprise applications.
  • Google Cloud has a more flexible approach to billing, as you pay for the services you use per minute, rather than only per hour as competitors require.
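As a small sketch of what such a plain SQL query looks like from code, here is the BigQuery Python client querying one of Google’s public datasets; the dataset and query are our own illustration, not taken from the talk:

```python
# Minimal sketch: run a plain SQL query with the BigQuery Python client
# (pip install google-cloud-bigquery). Uses a public dataset for illustration.
from google.cloud import bigquery

client = bigquery.Client()   # uses application default credentials

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.name, row.total)
```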

All in all, it was a great day with plenty of information on the forefront of developing cutting-edge enterprise applications.

Support for Uniface in the cloud: a DevOps project

For the last few months we have been working towards adding cloud providers to the Product Availability Matrix (PAM). This project is known internally as Cloud Phase 1 and has proven to be, on the whole, a DevOps project.


For us to add support for a platform, there are several things that we must do – the most important of which is to test it, on every platform, for every build, to make sure it works. The framework we use to test Uniface is a custom-built application with the imaginative name of RT (Regression Test), which contains tests targeted at proving Uniface functionality. The tests have been built up and added to as new functionality is added, enhanced, or maintained.

Up until the cloud project, the process of building and testing Uniface (and this is a very simplistic description) was to:

  • Create a Uniface installation by collecting the compiled objects from various build output locations (we have both 3GL and Uniface Script)
  • Compile the RT application and tests using the newly created version
  • Run the test suite
  • Analyze the output for failures and if successful
    • Create the installable distribution (E-Dist or Patch)

Testing and building were done on pre-configured (virtual) machines with databases and other 3rd-party applications already installed.

Adding a new platform (or a new version of an existing platform) to our support matrix could mean manually creating a whole new machine, from scratch, to represent the new platform.

To extend support onto cloud platforms, we have some new dimensions to consider:

  • The test platform needs to be decoupled from the build machine, as we need to build in-house and test in the cloud
  • Tests need to run on the same platform (e.g. CentOS) but on different providers (Azure, AWS, …)
  • Support for constantly updating Relational Database Service (RDS) type databases needs to be added
  • The environment needs to be scalable with the ability to run multiple test runs in parallel
  • It has to be easily maintainable

As we are going to be supporting the platforms on various cloud providers, we decided to use DevOps methodologies and the tools most common for this type of work. The process, for each provider and platform, now looks like this:

  • Template machine images are created at regular intervals using Packer. Ansible is used to script the installation of the base packages that are always required
  • Test pipelines are controlled using Jenkins
  • Machine instances (based on the pre-created Packer image) and other cloud resources (like networks and storage) are created and destroyed using Terraform
  • Ansible is used to install Uniface from the distribution media and, if needed, overlay the patch we are testing
  • The RT application is installed using rsync and Ansible
  • RT is then executed one test suite at a time with Ansible dynamically configuring the environment
  • Docker containers are used to make available any 3rd-party software and services we need for individual tests; they are only started if the current test needs them (see the sketch after this list). Examples of containers we have made available to the test framework are a mail server, proxy server, web server, and LDAP server
  • Assets such as log files are returned to the Jenkins server from the cloud-based virtual machine using rsync
  • The results from Jenkins and the cloud tests are combined along with the results from our standard internal test run to give an overview of all the test results.
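To illustrate the on-demand containers mentioned in the list above, here is a hedged sketch using the Docker SDK for Python; the image, port, and test call are illustrative assumptions, not our actual test configuration:

```python
# Hedged sketch: start a throwaway LDAP container only for the test that
# needs it, and clean it up afterwards (pip install docker).
import docker

client = docker.from_env()

ldap = client.containers.run(
    "osixia/openldap:1.2.4",          # illustrative image
    detach=True,
    ports={"389/tcp": 10389},         # host port 10389 -> container port 389
)
try:
    run_ldap_tests(host="localhost", port=10389)   # hypothetical test entry point
finally:
    ldap.stop()
    ldap.remove()
```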

As quite large chunks of the processing are executed repeatedly (e.g. configure and run a test) we have grouped the steps together and wrapped them with make.

As most of the platforms go through the same process, we have also been able to parameterize each step. This should mean that adding a new platform or database to test on, after the distribution becomes available, “could” be as simple as adding a new configuration file, along the lines of the hypothetical sketch below.
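The sketch wraps the Terraform and Ansible command-line steps described above; the configuration format, file names, and playbooks are our own illustration, not the actual Uniface pipeline:

```python
# Hypothetical sketch of one parameterized test run: create the cloud
# resources, install and test Uniface, then tear everything down again.
import json
import subprocess

def run_platform(config_file):
    with open(config_file) as f:
        cfg = json.load(f)                       # e.g. provider, inventory, database

    tf_dir = f"terraform/{cfg['provider']}"
    subprocess.run(["terraform", "init"], cwd=tf_dir, check=True)
    subprocess.run(["terraform", "apply", "-auto-approve"], cwd=tf_dir, check=True)
    try:
        subprocess.run(["ansible-playbook", "-i", cfg["inventory"],
                        "install_uniface.yml"], check=True)
        subprocess.run(["ansible-playbook", "-i", cfg["inventory"],
                        "run_rt_suite.yml"], check=True)
    finally:
        subprocess.run(["terraform", "destroy", "-auto-approve"],
                       cwd=tf_dir, check=True)

run_platform("configs/centos7_aws.json")   # adding a platform = adding a config file
```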

The result of Phase 1 of the Cloud project is that the Product Availability Matrix has been extended to include new platforms and databases. Internally we also have the benefit of having a much more scalable and extendable testing framework.

The new platforms added to the PAM in 9.7.04 and 10.2.02 by the cloud project:

Uniface Deployment-in-Cloud

In this initial phase, we have been concentrating on the Linux platforms; next (in Phase 2) we will be working on Windows and MS SQL Server.

During this process, I have learnt a lot about our test framework and the tests it runs. Much of the work we have undertaken has just been a case of lifting what we already have and scripting its execution. This has not been the case for everything; there have been some challenges. An example of something that has been more complex than expected is testing LDAP. The existing environment used a single installation of LDAP for every platform being tested. Tests would connect to this server and use it to check the functionality. As the tests both read and write, we could only allow one test to be active at a time; other tests and platforms would have to wait until the LDAP server was released and became available before continuing. With the cloud framework, we have an isolated instance of the service for each test that needs it.

The project to bring cloud support to Uniface has been an interesting one. As well as allowing us to add new platforms and providers onto our supported matrix, it has also allowed us to be more scalable and flexible when testing Uniface.