Sunday 14 February 2021

Microservices and Observability

 

1. Log Aggregation - In a typical microservice architecture there may be hundreds of servers involved, and logging into the relevant servers to debug a failure would be nearly impossible. Typically all the logs from the various servers are aggregated into a centralized logging service with search capability, such as the ELK stack (Elasticsearch, Logstash and Kibana). Namespaces can be used to identify the origin of the logs.

2. Distributed tracing - In a microservice architecture a request could span many microservices, so debugging an issue may require checking the logs from multiple services.

A tracing id is typically generated by the gateway, passed as a request header to all the services and sent back in the response header. The tracing id can then be picked up from the response header of the failing request, or from any of its log messages, to find the rest of the logs across the different services (a minimal propagation sketch appears after this list).

3. Metrics and Alarms - Metrics from the application can be pushed to a centralized metrics service. These can then be used to set up alarm notifications that get immediate attention based on severity. Tools like Grafana provide a configurable dashboard over the metrics to give an overall picture of how the application has been performing. Typically a canary test is also deployed alongside each service that will surface failures or trigger alarms, so that the owner of the service is alerted even before a customer faces the issue.

4. Audit Logging - User actions are typically logged for future auditing purposes such as compliance, security and customer support.

5. Health Check API - A microservice may expose a health check API which typically checks connections to infrastructure dependencies like the DB connection pool, disk space etc. This API is periodically called by a service registry or load balancer to identify the healthy service instances.

6. Log Deployment Changes - Every deployment change to production may be logged to identify whether a particular issue started occurring after a certain production deployment.

7. Exception Tracking - All exceptions can be reported to a centralized exception tracking service which can notify the developers about failures.
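
As a rough sketch of the tracing id propagation described in point 2 above (not from any particular tracing library), a servlet filter in a Spring Boot service could reuse an incoming trace id or generate one, expose it to log statements via the SLF4J MDC and echo it back in the response header. The header name X-Trace-Id and the MDC key are just illustrative choices.

import java.io.IOException;
import java.util.UUID;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class TraceIdFilter extends OncePerRequestFilter {

    private static final String TRACE_ID_HEADER = "X-Trace-Id"; // illustrative header name

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        // Reuse the trace id set by the gateway if present, otherwise generate one.
        String traceId = request.getHeader(TRACE_ID_HEADER);
        if (traceId == null || traceId.isEmpty()) {
            traceId = UUID.randomUUID().toString();
        }
        // Make the trace id available to every log statement for this request.
        MDC.put("traceId", traceId);
        // Send it back so the caller can correlate logs across services.
        response.setHeader(TRACE_ID_HEADER, traceId);
        try {
            filterChain.doFilter(request, response);
        } finally {
            MDC.remove("traceId");
        }
    }
}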

Saturday 13 February 2021

Refactoring monolithic application into Microservices

 

Strangler pattern - The monolithic application is incrementally modernized by extracting services from it until the monolith shrinks to nothing or becomes just one of the services in the microservice architecture.

This ensures that the business served by the application is not impacted while the application is being cut over to a microservice architecture.

Strategies for refactoring a monolith to microservices:

1. Implement new features as services (stop monolith from growing)

While this stops the monolith from growing further and gives the microservice benefits of faster development/deployment, a new feature may not always qualify as a separate service.

An API gateway is the minimum requirement for introducing a new service, as it needs to sit in front of both the monolith and the new service so that requests can be routed appropriately (a routing sketch follows below).

REST or domain message interactions between the new service and the monolithic application are required for the new service to get the data it needs.
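
As an illustration of the routing described above, a Spring Cloud Gateway configuration could forward the extracted feature's requests to the new service while everything else still hits the monolith. This is only a sketch; the paths and URIs (new-feature, new-service, legacy-monolith) are made-up placeholders.

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StranglerRoutesConfig {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Requests for the extracted feature go to the new microservice.
                .route("new-feature", r -> r.path("/new-feature/**")
                        .uri("http://new-service:8080"))
                // Everything else still goes to the monolith.
                .route("monolith", r -> r.path("/**")
                        .uri("http://legacy-monolith:8080"))
                .build();
    }
}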

2. Separate presentation tier and backend

Move the presentation tier to a separate application using Single Page Application frameworks like Angular/React. This allows UI development to be decoupled from the backend and also tests the REST APIs that are to be exposed directly to the end user.

3. Break up monolith by extracting services based on the business capabilities



Friday 12 February 2021

Microservices - Handling partial failures and Circuit breaker


Circuit Breaker pattern - If a large number of requests are failing or timing out, trip the circuit breaker so that, for a timeout period, requests fail immediately without being forwarded to that microservice.

Making further requests to the service is pointless and only consumes valuable resources, like threads, in the calling application.

After the timeout period, allow requests through to the service again and, if they succeed, close the circuit breaker.

Netflix Hystrix and Resilience4j are Java libraries that implement the circuit breaker pattern.

Using a circuit breaker is only part of handling a dependent service failure; we still need to return an appropriate response to the client.

Handling partial failure for an unresponsive service - Returning a cached version of the data, if available, is one way to handle the scenario. If the unavailable service provides a critical piece of information for the API, the API needs to return an error; for less critical information, the API can choose to omit those fields or return cached data if available (see the sketch below).
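
A rough sketch of the above using Resilience4j (the remote call and the cached fallback are hypothetical placeholders): the breaker opens when half of the recent calls fail, stays open for 30 seconds, and the client falls back to cached data while calls are not permitted.

import java.time.Duration;
import java.util.function.Supplier;

import io.github.resilience4j.circuitbreaker.CallNotPermittedException;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

public class InventoryClient {

    private final CircuitBreaker circuitBreaker = CircuitBreaker.of("inventoryService",
            CircuitBreakerConfig.custom()
                    .failureRateThreshold(50)                        // open when 50% of recent calls fail
                    .waitDurationInOpenState(Duration.ofSeconds(30)) // stay open for 30s, then half-open
                    .build());

    public String getInventory(String productId) {
        Supplier<String> decorated = CircuitBreaker.decorateSupplier(circuitBreaker,
                () -> callInventoryService(productId)); // hypothetical remote call
        try {
            return decorated.get();
        } catch (CallNotPermittedException e) {
            // Circuit is open: fail fast and serve cached data instead of waiting on a dead service.
            return cachedInventory(productId); // hypothetical cache lookup
        }
    }

    private String callInventoryService(String productId) {
        // The actual REST call to the inventory microservice would go here.
        return "live-inventory-for-" + productId;
    }

    private String cachedInventory(String productId) {
        return "cached-inventory-for-" + productId;
    }
}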


Microservices and Service Discovery

 

Service Discovery pattern - The service instances in a microservice architecture change dynamically with auto-scaling, failures, upgrades etc. A service discovery pattern is used to identify the instances corresponding to a microservice.

The key component of service discovery is a service registry, which is a database of the network locations of the instances of a given service.

Service Discovery is achieved using the following patterns:

1. Self registration - A service instance registers itself with the service registry, and the service registry periodically invokes a health check API on the instance to make sure it is up and running. Service instances may also be required to invoke a heartbeat API of the service registry in order to keep their lease from expiring.

2. Client side discovery - The client service queries the service registry to get the list of available instances and does client-side load balancing across them using frameworks such as Ribbon from Netflix OSS.

3. Server side discovery - The client makes its request via a router, typically a load balancer that also acts as a service registry. The router then forwards the request to an available service instance, taking care of service registration, discovery and request routing with load balancing.


Spring Boot Eureka

For implementing a service registry in Spring Boot, one of the options is to use the Netflix Eureka server.

From start.spring.io, the Eureka Server dependency can be added to create a service registry server.

Use the @EnableEurekaServer annotation to enable the Eureka service discovery server.

If you then add the Eureka Discovery Client dependency to a microservice, so that it registers with the service discovery server, you are all set with a service registry and your microservice registered in it (a minimal sketch follows).
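
A minimal sketch of such a registry application (class and package names are illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Standalone Eureka service registry; clients register against it and query it for instances.
@SpringBootApplication
@EnableEurekaServer
public class ServiceRegistryApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServiceRegistryApplication.class, args);
    }
}

A client microservice with the Eureka Discovery Client dependency then registers itself by pointing eureka.client.serviceUrl.defaultZone in its configuration at this server.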


HashiCorp Consul - provides a service mesh which takes care of the following:

Consul follows a peer-to-peer architecture and uses a gossip protocol.

1. Service discovery server - a service registry which maintains how to connect to other services and the health of each service.

2. Configuration server - for managing configuration across the different microservices.

3. Segmentation - managing connections between the microservices, using a service graph to decide which service can access which service.

It also provides a sidecar proxy and maintains TLS certificates, which enables mTLS communication between two services.

This will make sure data in transit is also encrypted inside a microservice architecture.


Thursday 11 February 2021

Decomposing Application into microservices

 

Decompose by business capability

A business capability is something that a business does in order to generate value.

Capabilities of an online store could include order management, inventory management, shipping, online payment and so on.

It is based on what the business of the organization is.

Each business capability can be thought of as a service and this approach is business oriented.

A business capability or a sub-capability can become a service depending on several factors, such as whether they are intertwined or represent different phases. As the services evolve, they could again be split into multiple services or even combined.


Decompose by subdomain

This is based on Domain Driven Design (DDD). In DDD, the software components are mapped to the business domain objects.

This also helps in developing a ubiquitous language that can be used for communication between domain experts, business analysts, software developers and other stakeholders.

A business domain can have several subdomains. Subdomains can be very similar to the business capabilities.

The domain objects can have different meanings in different subdomains and are limited to a bounded context.

Each subdomain can be mapped to a microservice.


Obstacles for decomposing an application into microservices

1. Network latency - A distributed architecture results in a lot of cross-service calls, each of which adds network latency.

2. Synchronous interprocess communication reduces availability - If one of the dependent services is unavailable, data may not be processed even when the calling service itself is up.

3. Maintaining data consistency across services - With a separate database for each service, updating a shared business domain will require updates in other microservices. Transactional messaging and SAGA patterns can be used to solve this scenario.

4. Obtaining a consistent view of the data - Decomposition into multiple services will raise the challenge of combining the data from across multiple services while presenting the data to the user.

API composition and CQRS are two patterns that can be used in these scenarios.

5. God classes prevent decomposition - classes that are key to several of the services and are bloated with numerous responsibilities.

Use Domain Driven Design (DDD) to model the domain representation of the resource and split it across services based on the bounded context in which the domain objects reside (decompose by subdomain).




Saturday 9 January 2021

Microservices - Transactional Messaging and handling 2 phase commit problem

 

In a microservice architecture, when data is updated in one microservice, another service may also need to reflect the updated state, or the services may be involved in a SAGA pattern for completing a transactional outcome.

But updating the data in the database and publishing the message is a 2 phase commit problem. The database update could be successful while the publishing of the message fails, or vice versa. If one thinks of storing and republishing the messages only on failure, messages can go out of order.

The typical way to handle this issue is to use the transactional outbox pattern, i.e., store the message to be published in an outbox table every time the data is updated. This means the entire operation can be committed in a single transaction and no 2 phase commit problem is involved.
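
A minimal sketch of the outbox write, assuming Spring Data style repositories (Order, OrderRepository, OutboxEvent and OutboxRepository are hypothetical names): the business row and the outbox row are written in the same local transaction, so either both are committed or neither.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderRepository orderRepository;     // hypothetical JPA repository for the order table
    private final OutboxRepository outboxRepository;   // hypothetical JPA repository for the outbox table

    public OrderService(OrderRepository orderRepository, OutboxRepository outboxRepository) {
        this.orderRepository = orderRepository;
        this.outboxRepository = outboxRepository;
    }

    @Transactional
    public void createOrder(Order order) {
        // The business update and the message to be published are committed in one local transaction,
        // so there is no 2 phase commit between the database and the message broker.
        orderRepository.save(order);
        outboxRepository.save(new OutboxEvent("OrderCreated", order.getId(), order.toJson()));
    }
}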

Now this message needs to be published from the transactional outbox table to the messaging broker for the other microservices to take corresponding actions. This can be done via one of the following approaches:

1. Polling publisher: A polling publisher frequently checks the transactional outbox table, picks the data in timeCreated order and publishes the messages in the same order. If the messaging broker is down or not reachable while trying to publish, the polling publisher can do finite retries and leave the message for the next iteration. Message publishing can get slightly delayed with this approach depending on the polling frequency, so the frequency needs to be adjusted based on the use case. The polling publisher needs to delete the data from the transactional outbox table once the message is published (a sketch appears below).

2. Transaction log tailing: Various connectors are available for messaging brokers like Kafka for tailing a source database and publishing the changes, even on to a target sink database. Such a connector can be used to tail the transactional outbox table and publish the messages. This approach is reliable but is database specific and a bit complex, so if the minor delay is fine for the use case, the polling publisher could be the go-to solution.

But transaction log tailing may be a requirement when using a NoSQL database. A NoSQL database typically does not have multi-record transactions, so to store the message atomically with the data, the generated message may also have to be stored in the same table as the domain object. The challenge then is for a polling publisher to identify which messages still need to be published. Transaction log tailing generates messages only for the newly created/updated data, eliminating this challenge.
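
A sketch of the polling publisher from approach 1 above, assuming Spring's scheduling and Kafka support (OutboxRepository, OutboxEvent and the derived query method are hypothetical; the 2 second interval is arbitrary):

import java.util.List;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PollingPublisher {

    private final OutboxRepository outboxRepository;            // hypothetical repository over the outbox table
    private final KafkaTemplate<String, String> kafkaTemplate;  // client for the messaging broker

    public PollingPublisher(OutboxRepository outboxRepository, KafkaTemplate<String, String> kafkaTemplate) {
        this.outboxRepository = outboxRepository;
        this.kafkaTemplate = kafkaTemplate;
    }

    @Scheduled(fixedDelay = 2000) // polling frequency to be tuned per use case; needs @EnableScheduling
    public void publishPendingMessages() {
        // Pick the pending rows in timeCreated order so that messages keep their original order.
        List<OutboxEvent> events = outboxRepository.findAllByOrderByTimeCreatedAsc(); // hypothetical query method
        for (OutboxEvent event : events) {
            // Retries on broker errors are omitted for brevity; on failure the row stays for the next iteration.
            kafkaTemplate.send(event.getEventType(), event.getAggregateId(), event.getPayload());
            // Remove the row once the message is handed over to the broker.
            outboxRepository.delete(event);
        }
    }
}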


Connector frameworks:

Debezium

LinkedIn Databus

DynamoDB Streams

Confluent connectors






Friday 8 January 2021

Microservices and Software Architecture Styles

  Software Architecture Styles

1. Layered Architecture - The application is organized into multiple layers, such as the presentation layer, business layer and persistence layer in a 3-tier architecture.

2. Hexagonal Architecture - Business logic is kept independent of other layers with communications happening via inbound/outbound adapters (Interfaces).

3. Microservices Architecture - The application is divided into multiple services based on domain, maintainability and other factors; each service of a microservice architecture typically follows the hexagonal architecture.


Advantages of microservices

1. Enables continuous delivery and deployment of a large complex application via independent services.

2. Independent scaling

3. Fault isolation

4. Independent technology stack and can easily update to latest tech stack

5. Independent CI/CD pipeline and faster deployment/release

6. Smaller services are easily maintainable

7. Enables teams to be autonomous

Drawbacks of microservices architecture

1. Finding the right set of services is challenging

2. Features spanning multiple services require careful coordination:

 a. how to handle transaction across services (Saga)

 b. how to query data from multiple microservices (CQRS/API Composition)

 c. may require a careful rollout plan across the involved microservices so as not to break backward compatibility

3. Testing and deployment can be difficult when multiple services are involved.


SOA Vs microservices

SOA uses SOAP-based web services, whereas microservices use lighter REST-based or gRPC-based calls.

SOA is used to integrate large monolithic applications, whereas in a microservice architecture the application is broken down into multiple microservices.

An SOA-based architecture has a single shared database for the entire application, whereas each microservice has its own database.


Thursday 7 January 2021

Write a program to print the longest sequence of each number in a given array

 

Sample input array: {1,1,2,2,2,3,3,2,2,2,2,3,3,3,4,1,1,1}


Expected output:

1 occurred 3 times max in sequence

2 occurred 4 times max in sequence

3 occurred 3 times max in sequence

4 occurred 1 times max in sequence


https://github.com/prasune/Algorithms/tree/master/src/main/java/com/test/algorithm


package com.test.algorithm;

import java.util.HashMap;
import java.util.Map;

public class MaxRepeatingOccurrence {

    public static void main(String[] args) {
        int[] nums = new int[]{1, 1, 2, 2, 2, 3, 3, 2, 2, 2, 2, 3, 3, 3, 4, 1, 1, 1};

        printMaxRepeatingOccurrences(nums);
    }

    private static void printMaxRepeatingOccurrences(int[] nums) {
        // Maps each number to the length of its longest consecutive run seen so far.
        Map<Integer, Integer> numOccurrenceMap = new HashMap<>();
        int currentSequenceCount = 0;
        for (int i = 0; i < nums.length; i++) {
            currentSequenceCount = currentSequenceCount + 1;
            // A run ends at the last element or when the next element differs.
            if (i == nums.length - 1 || nums[i] != nums[i + 1]) {
                if (numOccurrenceMap.get(nums[i]) == null || currentSequenceCount > numOccurrenceMap.get(nums[i])) {
                    numOccurrenceMap.put(nums[i], currentSequenceCount);
                }
                currentSequenceCount = 0;
            }
        }
        for (Map.Entry<Integer, Integer> numOccurrenceEntry : numOccurrenceMap.entrySet()) {
            System.out.println(numOccurrenceEntry.getKey() + " occurred " + numOccurrenceEntry.getValue() + " times max in sequence");
        }
    }
}

Write a program to print combinations of numbers in an array that give a target sum

 

A set of unique sorted numbers is given as input, like {1,2,4,6,9,10}

Write a program to print all combinations of numbers that give a target sum, say 15

In this case, the answer should be:

10 4 1

9 4 2

9 6


https://github.com/prasune/Algorithms/tree/master/src/main/java/com/test/algorithm/recursion


package com.test.algorithm.recursion;


public class FindSumCombinations {

    public static void main(String[] args) {
        int[] nums = new int[]{1, 2, 4, 6, 9, 10};
        int target = 15;

        printCombinations(nums, target);
    }

    private static void printCombinations(int[] nums, int target) {
        // Try each number as one element of a combination; it is printed last
        // because the deeper recursive calls print their numbers first while unwinding.
        for (int i = 0; i < nums.length; i++) {
            if (nums[i] <= target && printCombinations(nums, target - nums[i], i + 1)) {
                System.out.print(nums[i]);
                System.out.println();
            }
        }
    }

    // Prints one combination of numbers from startIndex onwards that sums to target
    // and returns true if such a combination was found.
    private static boolean printCombinations(int[] nums, int target, int startIndex) {
        for (int i = startIndex; i < nums.length; i++) {
            int newTarget = target - nums[i];
            if (newTarget > 0) {
                if (printCombinations(nums, newTarget, i + 1)) {
                    System.out.print(nums[i] + " ");
                    return true;
                }
            } else if (newTarget == 0) {
                // Exact match: this number completes the combination.
                System.out.print(nums[i] + " ");
                return true;
            } else {
                // The input is sorted in ascending order, so the remaining numbers are even larger.
                return false;
            }
        }
        return false;
    }
}