Tag Archives: Architecture

Real Life Scrum: A Presentation to Technology Students in Amsterdam

On Tuesday, December 1st, Uniface was invited to deliver a guest lecture at the Technical School in Amsterdam for students with ambitions in technology. Berry Kuijer, JiaoJiao Xia and I represented Uniface, sharing our knowledge, expertise and real-life experiences with the students.

The presentation started with an ‘Introduction to Uniface’ by Berry Kuijer, focusing on a few key points:
- History and vision of Uniface
- Customers and the market span
- Business model
- Development and deployment
- A live Uniface application development demo

Next in line were JiaoJiao and I, giving a presentation about “Scrum in the Uniface lab.” We briefed the students on the software development methodology used to manage product development at Uniface. First we explained how the agile Scrum methodology is used in the Uniface Lab: maximizing the team’s ability to deliver quickly, to respond to emerging requirements and to adapt to evolving technologies and changing market conditions.

The second part of the presentation showed the ‘bigger picture’ of Scrum through a real-life example, and how to apply it in daily life. We described a feature that started off as a simple story in the lab and then grew larger and more complex as additional requirements emerged. We wanted the students to understand that in the “real world” of software development, you can’t always foresee everything at the beginning. In our example, we only understood the full complexity of the feature, and its impact on the existing software architecture, once we started working on it, and so the requirement turned out to be bigger than it had first appeared. There are of course many other examples of how things can change during the process of building software.

Uniface Lecture Team

So we took them from defining the features (user stories), to the wish list (product backlog), to planning those into workable timeslots (sprints), reviewing progress in the daily scrums, and presenting the finished result in the review or demo meeting. We concluded with the retrospective, where the team reflects on what went well and what can be improved in order to learn and refine the process. We emphasized each step by reviewing examples of how it worked in practice in our teams and with our product owners at Uniface. Finally, we told them that they could apply this methodology to their team assignments and their current studies or projects.

The presentation was well received; the atmosphere was lively, with plenty of interaction and interesting questions from the students. The concept of Scrum came across clearly, and they could relate to our “real life” example of requirements changing and growing. It was a great experience for us, and we believe the students benefited from the perspectives that the Uniface guest lecture team provided.

When thinking Desktop “first” still matters

By Clive Howard, Principal Analyst, Creative Intellect Consulting

A few months back, I registered for Mobile World Congress 2015 in Barcelona. As an Analyst, there is a different registration process to the one used for regular attendees. This is so the organisers can validate that someone is a legitimate industry analyst. As well as entering a significant amount of personal data, additional information such as links to published work and document uploads are also required. Crucially, there are a number of screens to complete the registration and accreditation process. But more to the point, many different types of data must be entered – from single and multiple line text entry to file uploads. Some data (such as hyperlinks) requires cut and pasting.

I’m sure that I could have done this using a mobile phone, but it would have taken a long time, been awkward and irritating, and probably been highly prone to mistakes. In short, I would never have considered doing something like this on my phone. Could I have used a tablet? Without a keyboard and mouse it would have been problematic, especially with a small screen. Using a tablet-only operating system might also have caused problems in places, such as uploading documents from centrally managed systems. Actually, I did use a tablet, but one connected to a 20-inch monitor, keyboard and mouse, and running Windows. In that traditional desktop-looking environment the process was relatively quick and painless.

Rumours of the desktop’s demise are greatly exaggerated

It is not just complex data entry scenarios such as this that challenge mobile devices. Increasingly I see people attach keyboards to their tablets and even phones. Once one moves beyond writing a tweet or a one-line email, many mobile devices become a pain to use. The reality of our lives, especially at work, is that we often have to enter data into complex processes. Mobile can be an excellent complement, but not a replacement. This is why we see so many mobile business apps providing only a tiny subset of the functionality found in the desktop alternative, or apps that extend desktop application capabilities rather than replicate or replace them.

One vendor known for their mobile-first mantra recently showed off a preview version of one of its best known applications, redesigned from the ground up. When I asked if it worked on mobile the answer was no; they added (quite rightly) that no one is going to use this application on a mobile device. These situations made me think about how, over the last couple of years, we have heard relentlessly about designing “mobile first”: as developers we should build for mobile and then expand out to the desktop. The clear implication has been that the desktop’s days are over.

This is very far from the truth. Not only will people continue to support the vast number of legacy desktop applications, they will definitely be building new ones. Essentially, there will continue to be applications that are inherently “desktop first”. This does not mean that desktop application development remains business as usual. A new desktop application may still spawn mobile apps and need to support multiple operating systems and form factors. It may even need to engage with the Internet of Things.

The days of building just for the desktop safe in the knowledge that all users will be running the same PC environment (down to the keyboard style and monitor size) are gone in many if not the majority of cases. Remember that a desktop application may still be a browser based application, but one that works best on a desktop. And with the growth of devices such as hybrid laptop/tablet combinations, a desktop application could still have to work on a smaller screen that has touch capabilities.

It’s the desktop, but not as we know it

This means that architects, developers and designers need to modernise. Architects will need to design modern Service Orientated Architectures (SOA) that both expose and consume APIs (Application Programming Interfaces). SOA has been around for some time but has become more complex in recent years. For many years it meant creating a layer of SOAP (Simple Object Access Protocol) Web Services that your in-house development teams would consume. Now it is likely to mean RESTful services utilising JSON (JavaScript Object Notation) formatted data and potentially being consumed by developers outside of your organisation. API management, security, discovery, introspection and versioning will all be critical considerations.
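To make the RESTful-plus-JSON idea concrete, here is a minimal sketch in Python. The resource name and fields are invented for illustration; a real service would sit behind an HTTP framework, but the representation consumed by internal and external developers alike would look much like this:

```python
import json

# A minimal sketch of a RESTful-style resource in JSON. The "customer"
# resource and its fields are invented purely for illustration.
def customer_resource(customer_id):
    """Build the JSON body a hypothetical GET /customers/<id> might return."""
    record = {"id": customer_id, "name": "A. Example", "active": True}
    return json.dumps(record)

# A consumer, in-house or outside the organisation, parses the same
# representation straight back into native data structures.
body = customer_resource(7)
data = json.loads(body)
```

Because JSON maps directly onto the data structures of most client languages, external consumers need no WSDL-style tooling to work with such a payload.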

Developers will equally need to become familiar with working against web service APIs instead of the more traditional approach where application code talked directly to a database. They will also need to be able to create APIs for others to consume. Pulling applications together from a disparate collection of micro services (some hosted in the cloud) will become de rigueur. If they do not have skills that span different development platforms then they will at least need to have an appreciation for them. One of the problems with mobile development inside the enterprise has been developers building SOAP Web Services without knowing how difficult these have been to consume from iOS apps. Different developer communities will need to engage with one another far more than they have done in the past.
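The shift from "talk to the database" to "compose from services" can be sketched as follows. The two fetch functions below are stand-ins for hypothetical HTTP calls to separate micro services; every name and field is invented for illustration:

```python
# Instead of querying a database directly, application code composes its
# view from service calls. fetch_profile and fetch_orders stand in for
# hypothetical HTTP calls to two independent micro services.
def fetch_profile(user_id):
    return {"id": user_id, "name": "A. Example"}      # e.g. GET /profiles/<id>

def fetch_orders(user_id):
    return [{"order": 1, "total": 20.0},              # e.g. GET /orders?user=<id>
            {"order": 2, "total": 15.5}]

def account_summary(user_id):
    """Compose a client-facing view from two independent services."""
    profile = fetch_profile(user_id)
    orders = fetch_orders(user_id)
    return {"name": profile["name"],
            "order_count": len(orders),
            "lifetime_total": sum(o["total"] for o in orders)}

summary = account_summary(7)
```

The composing code neither knows nor cares which data store sits behind each service, which is precisely the decoupling the text describes.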

Those who work with the data layer will not be spared change. Big Data will affect the way in which some data is stored, managed and queried, while NoSQL data stores will become more commonplace. The burden placed on data stores by major increases in the levels of access caused by having more requests coming from more places will require highly optimised data access operations. The difference between data that is accessed a lot for read-only purposes and data which needs to be changed will be highly significant. We are seeing this with banking apps where certain data such as a customer’s balance will be handled differently compared to data involved in transactions. Data caching, perhaps in the cloud, is a popular mechanism for handling the read-only data.
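The read-mostly versus transactional split described above can be sketched with a small time-to-live cache. The account names, balances and TTL below are invented for illustration; a production system would more likely use a shared cache service than an in-process dictionary:

```python
import time

# Sketch of a TTL cache for read-heavy data such as an account balance,
# while writes (transactions) always go to the primary store.
_balance_store = {"acct-1": 100.0}   # stand-in for the primary database
_cache = {}                          # account -> (value, expiry timestamp)
CACHE_TTL = 30.0                     # seconds a cached balance stays valid

def read_balance(account):
    """Serve balance reads from the cache when fresh; fall back to the store."""
    hit = _cache.get(account)
    now = time.monotonic()
    if hit is not None and hit[1] > now:
        return hit[0]
    value = _balance_store[account]            # the "expensive" read
    _cache[account] = (value, now + CACHE_TTL)
    return value

def apply_transaction(account, amount):
    """Writes hit the primary store and invalidate the cached copy."""
    _balance_store[account] += amount
    _cache.pop(account, None)

balance_before = read_balance("acct-1")
apply_transaction("acct-1", -25.0)
balance_after = read_balance("acct-1")
```

Invalidating on write keeps the two access paths consistent while letting the vast majority of balance reads never touch the primary store.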

Continuation of the Testing challenge

Testing will need to take into account the new architecture, design paradigms and potential end-user scenarios. Test methodologies and tools will need to adapt and change to do this. The application stack is becoming increasingly complex. A time delay experienced in the application UI may be the result of a micro service deep in the system’s backend. Testing therefore needs to cover the whole stack – a long-standing challenge for many tools on the market – and architects and developers will need to make sure that failures in third-party services are handled gracefully. One major vendor had a significant outage of a new cloud product within the first few days of launch because of a dependency on a third-party service whose failure they had not accounted for.

Modelling: Essential Not Optional (Part 1)

By Ian Murphy, Principal Analyst and Bola Rotibi, Research Director, Creative Intellect Consulting

As a relatively new engineering discipline, software development has been looking for ways to improve the quality and cut the cost of what it does. There are good reasons for this: multi-tier computing systems can take hundreds of man-years of effort and cost tens of millions of dollars to build. These systems have multiple points of integration with other pieces of software. Being able to model this complexity helps architects, developers and system engineers to examine the application before they start and decide how to approach the project.

The use of models in engineering disciplines has been going on for millennia. At a high level, models provide a view that abstracts away unnecessary detail to deliver an uncluttered picture of a solution for the respective stakeholder audiences. Models are used to create a proof of concept that allows architects and engineers to study complex problems before time and money are committed to construction. For example, modelling a bridge over a canyon would enable architects to see how the bridge would look and highlight any issues with the materials used and the terrain.

Different model layers then present varying levels of detail, depicting all relevant artefacts along with their relationships to, and dependencies on, each other.

You might think, therefore, that the use of modelling inside IT departments would be widespread, with significant attention and investment paid to its use. Sadly, for too many organisations this is not the case. Yes, lots of teams will use models and modelling in some aspects of the development and delivery process, but it will not be consistently applied or formally implemented. Despite efforts to drive wider use of modelling in software development, the companies that actually make modelling a core and formal function of their development and delivery processes are few and far between. So why is that?

Common modelling failures

There are three common failures of modelling that lead to models being dismissed as unusable:

  • The first is that the model offers too simplistic a view for the different stakeholders involved. It does not provide the right level of basic information for the various viewpoints required, with the result that little to no understanding of any problems can be gained from it.
  • The second is that the model is too detailed, making it hard to abstract the relevant information for a particular viewpoint easily and quickly, and hard to understand problems from a higher perspective. It might seem that when modelling something as complex as a multi-tier CRM product there is no such thing as too much detail, but there is.
  • The third is that models are too incomplete to allow automatic transformation of the visual representation into executable working code.

Modelling 101: 5 points of effectiveness  

Ultimately, the main objective of a model is to communicate a design more clearly: allowing stakeholders to see the bigger picture, i.e. the whole system, and to assess different options, costs and risks before embarking on actual construction. To achieve this, there are five key characteristics that a model must convey:

  • Abstraction: A model must allow you to abstract the problem into simple meaningful representation. If you look at architectural building models, they use blocks to represent buildings. In an IT sense, the initial model will simply show connections between systems but not the underlying coding detail.
  • Understanding: Having abstracted the problem, what the model must then convey is sufficient information and detail in a way that is easy to understand for the different audiences concerned.
  • Accuracy: If a model is not an accurate representation of the problem then it is useless. An accurate model provides a useful starting point for establishing a common view.
  • Alert for prediction: A model alone cannot necessarily provide all the key details of a problem. In modelling a banking system there would be a requirement to use additional tools to predict workload and throughput. This would have to be done in order to select the right hardware, design the correct network architecture and ensure that the software can scale to the predicted capacity demand. One common failure of many IT systems is that they are often under-scaled. If properly modelled with a clear prediction step, this problem would be reduced.
  • Implementation and execution cost: Models need to be cheap to produce and easy to change, especially in comparison to the cost of the product or solution delivered by the model. 

Legacy: Old technology that frightens developers (part 1)

By Clive Howard, Principal Practitioner Analyst, Creative Intellect Consulting

To developers, the term legacy is often a dirty word meaning old software that is a pain to work with. Ironically, of course, it is the software that developers spend most of their time working with, and developers made it what it is. The question all developers should ask is why legacy software is generally considered bad, and what can be done to avoid this situation in future. After all, an application released today will be legacy tomorrow.

Development teams do not set out to create bad software that will become difficult to maintain, support and extend. When allowed by their tyrannical masters, architects and developers put a lot of work in upfront to try to avoid the typical problems of bloat, technical debt and bugs that they fear will happen later. For some reason, over the years these problems seem to have become inevitabilities.

The kinds of issues that make developers fear working with legacy include: technologies that are no longer fit for purpose; bloated codebases that are impossible to understand; different patterns used to achieve the same outcome; lack of documentation; inexplicable hacks and workarounds; and a lack of consistency, plus many, many more. Most of these have their roots in a combination of design and coding.

Design theory does not always reflect reality

Architects aim to design clean, performant, scalable and extensible applications. Modern applications are complex, involving multiple “layers”, often distributed from a hardware perspective and including third-party and/or existing legacy applications. Different components will frequently be the responsibility of different development teams working in different programming languages and tools.

For some time now the principle of separation has been applied to try to avoid the tightly coupled client/server applications of the past, which were known to cause many legacy issues. This has gone under many guises: “separation of concerns”, n-tier, Service Orientated Architecture (SOA) and so on. They are all variants of the same concept: the more separated out the components of an application are, the more flexible, scalable, extensible and testable that application will be. For developers, an application made of smaller parts is more manageable from a code perspective.

One of the classic scenarios is the interchangeable database idea. An application might start life using one database, but later on it needs to change to another. The concept of ODBC meant that it was easy to simply change a connection string in code and, provided the new database had the same structure as the previous one, everything would continue without a hitch. The problem has been that what looks good theoretically doesn’t hold up in reality.

In the example of changing the database, the reality often was that a number of stored procedures, triggers or functions lived in the database. Changing from one database to another meant porting these, and that in itself can be a significant task. The time, and therefore cost, of such an activity resulted in the old database continuing in use. Hence today we find so many applications running on unsuitable databases such as Access or FileMaker. A developer then has the frustration of having to work with inherently limiting and non-performant code.
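The interchangeable-database idea can be sketched in miniature: the application depends only on a connection factory, so in theory swapping backends means changing one connection string. Here sqlite3 stands in for a real ODBC data source (table and data are invented); note that this is exactly the part that works, while the stored procedures and triggers the text mentions are what the sketch cannot carry across:

```python
import sqlite3

# The application only ever asks the factory for a connection, so the
# backend is, in principle, a one-line configuration change.
CONNECTION_STRING = ":memory:"   # imagine swapping this for another DSN

def get_connection(conn_str=CONNECTION_STRING):
    """Single point where the database backend is chosen."""
    return sqlite3.connect(conn_str)

conn = get_connection()
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'A. Example')")
rows = conn.execute("SELECT name FROM customers WHERE id = 1").fetchall()
conn.close()
```

Plain SQL like this ports cleanly; it is the logic living inside the old database that turns the "change one string" theory into a migration project.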

No immunity from separatist design strategies

If we move forward to many of today’s architecture patterns, such as SOA, we still see similar problems. The concept of SOA is that components of an application become loosely coupled, so different parts of the application are less wedded to one another. Unfortunately, within the separate services and consumers the same problems as outlined above can apply.

Worse still, many service providers do not version their services. Google Maps will often bring out a new version of its service while clients calling the previous version continue to function. However, many others (social networks take note) do not follow this practice and frequently push out breaking changes to their services. This introduces a whole new problem into legacy applications, whereby developers have to regularly go back into code and update it to work with the changes to the service.
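Service versioning of the kind praised above can be sketched very simply: a breaking change ships under a new path while the old handler stays in place. The paths, fields and handlers below are all invented for illustration:

```python
# Sketch of URL-path versioning: old clients keep resolving to the old
# handler when a breaking change ships. Everything here is hypothetical.
def get_place_v1(place_id):
    return {"id": place_id, "latlng": "52.37,4.90"}    # old flat field

def get_place_v2(place_id):
    return {"id": place_id, "lat": 52.37, "lng": 4.90}  # breaking change

ROUTES = {
    "/api/v1/places": get_place_v1,   # legacy clients still work
    "/api/v2/places": get_place_v2,   # new clients opt in explicitly
}

def handle(path, place_id):
    """Dispatch a request path to the matching versioned handler."""
    return ROUTES[path](place_id)

old_client = handle("/api/v1/places", 9)
new_client = handle("/api/v2/places", 9)
```

A provider that skips this step forces every consuming application to absorb the breaking change on the provider's schedule rather than its own.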

Enabling the Mobile App? (Part 2)

Guest Post by Clive Howard, Principal Practitioner Analyst, Creative Intellect Consulting

Read Part 1: Enabling the Mobile App?

A new architecture for a new world

As seen in the previous post, the real challenge is what lies behind the app. The solution, for many, is to move to a new type of architecture where code can be shared and re-used across many different use cases. Business logic needs to be contained in one place which all client applications reuse. Client applications become essentially UIs developed for the specific environment in which they run (phone, tablet, desktop and so on).

Developers then maintain the majority of an application’s functionality within a single code base, around which they can build a suite of tests (such as unit tests), implement security and manage a single deployment process. Like the business case for hybrid, the case for such a new architecture is compelling as it reduces time and cost over the application lifecycle. In addition, it becomes faster and easier to develop and deploy new clients.
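The "one code base, many thin clients" idea can be sketched as a single shared business rule with per-device presentation wrappers. The function names, tax rate and formats are invented for illustration:

```python
# One shared business-logic function; the per-device "clients" differ
# only in presentation. All names and values here are hypothetical.
def quote_total(items, tax_rate=0.20):
    """The single, shared business rule: subtotal plus tax."""
    subtotal = sum(items)
    return round(subtotal * (1 + tax_rate), 2)

def phone_view(items):
    # A phone UI shows a terse summary
    return f"Total: {quote_total(items)}"

def desktop_view(items):
    # A desktop UI can afford more detail, but reuses the same logic
    total = quote_total(items)
    return f"{len(items)} items, total incl. tax: {total}"

phone = phone_view([10.0, 5.0])
desktop = desktop_view([10.0, 5.0])
```

A change to the tax rule happens once, and every client, plus its test suite, picks it up from the same place.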

One such approach is Service Orientated Architecture (SOA), which makes use of a middleware layer of Web Services. Web Services have been around for some time and there are a number of frameworks and tools for creating them, both from vendors such as Microsoft and IBM and from Open Source projects. The traditional Web Service used the Simple Object Access Protocol (SOAP), which is still popular within the enterprise today. Whilst SOAP had developer benefits, such as the WSDL which made it easy to discover service methods and data structures, it required a lot of XML-formatted data to be passed between client and server.

The world of mobile has low-bandwidth networks and high data charges, so a far more lightweight data transfer method was needed. The most popular of these emerged from the Representational State Transfer (REST) based approach, which typically uses JavaScript Object Notation (JSON) to structure data.
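The size difference is easy to demonstrate. Below, the same three fields are encoded as a simplified SOAP-style envelope and as a JSON body; the envelope is a hand-written stand-in for illustration, not a real WSDL-described message, and the field names are invented:

```python
import json

# The same payload, twice: a simplified SOAP-style XML envelope versus
# a JSON body, to compare bytes over the wire.
soap_body = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body><GetBalanceResponse>'
    '<account>acct-1</account><balance>100.0</balance><currency>EUR</currency>'
    '</GetBalanceResponse></soap:Body></soap:Envelope>'
)
json_body = json.dumps({"account": "acct-1", "balance": 100.0, "currency": "EUR"})

xml_bytes = len(soap_body.encode("utf-8"))
json_bytes = len(json_body.encode("utf-8"))
```

On a metered mobile connection making thousands of such calls, the envelope overhead adds up quickly, which is the pressure that drove REST and JSON to dominate mobile backends.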

Any developer working with SOA to support mobile apps must consider the size and speed of data over the wire or risk users incurring prohibitive usage costs and poor app performance. Therefore when building for devices it is important to not just consider the code within the app itself but also the communication with the server and the performance and security of the server based code.

Developers need to consider their responsibilities in creating backend services. For example, in traditional client/server development it has been easy to have databases return large record sets to the client (in case any of those fields are needed in future). When data starts moving across mobile networks, it is not just the format that matters but the volume. If an app only needs five columns from a database table, then return only those five, as that will keep the data packet size to a minimum.
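The five-column point can be shown directly at the query level. The table, columns and data below are invented, with sqlite3 standing in for whatever database sits behind the service:

```python
import sqlite3

# Trim the payload at the query: select only the columns the app
# actually renders, rather than SELECT *. Schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE contacts
                (id INTEGER, name TEXT, email TEXT, phone TEXT,
                 city TEXT, notes TEXT, avatar BLOB)""")
conn.execute("INSERT INTO contacts VALUES (1,'A. Example','a@example.org',"
             "'555-0100','Amsterdam','long free-text notes...', x'00')")

# Wide query: everything, including the blob the mobile app never shows
wide = conn.execute("SELECT * FROM contacts").fetchone()
# Narrow query: just the five fields the list screen needs
narrow = conn.execute(
    "SELECT id, name, email, phone, city FROM contacts").fetchone()
conn.close()
```

Every column dropped here is data that never has to be serialised, transmitted over the mobile network, or parsed on the device.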

Use tools such as mobile simulators to mimic different types of network and bandwidth availability to check the impact on app performance and optimise accordingly. Don’t forget the “no network connection” scenario and handling any offline data change operations.

Choosing the right tools and frameworks should help developers create, analyse and optimise all areas of development and so create secure, high performing and usable apps.

Building for success beyond today’s mobile needs

The challenge for organisations is how fast they can move to this type of architecture to support the burgeoning suite of mobile apps that their business requires. Many will look for ways to stop-gap the situation, rolling out mobile apps whilst addressing the bigger architectural shift. That approach may involve wasted effort as interim solutions are scrapped later, but it could provide useful learning opportunities. Existing technology choices will dictate how, and therefore how fast, this transition can be made.

As we move into a rapidly changing world of devices it would behove organisations to adopt technology stacks that enable not just the ability to share data and logic with multiple user endpoints but also to be deployed to the Cloud.

Smart IT functions that get this transition right will deliver significant competitive advantage and cost savings for their businesses. We are only at the beginning of a wave of new devices and functional requirements that extend applications beyond the company firewall.

This is especially relevant within the enterprise which has historically been able to move slowly in adopting IT trends. Now (and increasingly going forward) they will be under pressure from inside and outside the organisation to move far more quickly. The decisions they make now to support mobile may have repercussions for some time to come.