- What do you mean by scalability?
Scalability refers to an application’s ability to maintain performance as the size or volume of data, or the number of users, grows. Large enterprise applications are built on architectures that take into account application state, system load, request complexity, and resource usage such as CPU and network utilization.
Java applications often demand more computing power and network bandwidth than a single box can provide. The application therefore has to be designed so that it can be scaled easily without affecting the existing functionality of the system.
- What are the parameters to measure application performance?
Application performance is measured with three basic parameters:
- Response time of a request – Refers to the time it takes to process the request
- Requests processed per second – Refers to the number of requests processed within a defined time interval (throughput)
- System availability – Refers to the system uptime
If any of these parameters does not meet the desired target, there is a performance issue. To improve performance, hardware or software resources can be added; the ability to add such resources effectively is what is termed scalability.
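As a rough illustration of the first two parameters, the sketch below (a minimal, assumed example, not taken from any particular framework) wraps a request handler, records its response time, and counts requests so throughput can be computed over a measurement window:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch: wrapping a request handler to record response time
// and request throughput. handleRequest is a placeholder for real work.
public class RequestMetrics {

    private final AtomicLong requestCount = new AtomicLong();
    private final AtomicLong totalNanos = new AtomicLong();

    // Times a single request and updates the counters.
    public void timedHandle(Runnable handleRequest) {
        long start = System.nanoTime();
        try {
            handleRequest.run();
        } finally {
            totalNanos.addAndGet(System.nanoTime() - start);
            requestCount.incrementAndGet();
        }
    }

    // Average response time in milliseconds over all recorded requests.
    public double averageResponseMillis() {
        long count = requestCount.get();
        return count == 0 ? 0 : totalNanos.get() / (count * 1_000_000.0);
    }

    // Requests processed per second for a given measurement window.
    public double requestsPerSecond(long windowSeconds) {
        return windowSeconds == 0 ? 0 : (double) requestCount.get() / windowSeconds;
    }
}
```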
- When can scalability issues occur in a Java web application?
Scalability issues occur when an external bottleneck is present. In this case, members of the cluster must wait to obtain the right to operate on the resource, because only one cluster node can access it at a time (a small sketch of this effect follows the list). The following bottlenecks commonly affect Java web applications:
- Database access
- File storage access
- Use of dedicated processing resources
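The sketch below illustrates the effect within a single JVM using a plain lock; in a cluster the same serialization happens at the shared database, file store, or dedicated device. The class and method names are hypothetical:

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of an external bottleneck: every caller must acquire
// the same lock before touching the shared resource, so requests are
// processed one at a time no matter how many threads (or nodes) exist.
public class DedicatedResource {

    private static final ReentrantLock EXCLUSIVE_ACCESS = new ReentrantLock();

    public String generateReport(String input) {
        EXCLUSIVE_ACCESS.lock();           // all other callers queue here
        try {
            return "report for " + input;  // stand-in for the real, slow work
        } finally {
            EXCLUSIVE_ACCESS.unlock();
        }
    }
}
```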
- Explain scalability approaches in Java.
There are two approaches to scalability: vertical scaling and horizontal scaling.
Vertical scaling refers to scaling up a single node by adding hardware resources to it. This is commonly used in virtualized environments, where system resources can be expanded dynamically. If an application component, such as an Apache daemon service, requires more resources, it is scaled vertically by moving it from a smaller to a bigger machine or virtual instance, i.e. by increasing its resource power. With virtualization tools, the resource power of the virtual systems hosting the application components can be increased without downtime.
Adding nodes and distributing requests across them is referred to as horizontal scaling. This is typically done in cloud-based environments. With horizontal scaling, multiple instances of an application component run in parallel, each handling its own active flows. If one instance fails, the remaining instances take over its load. If the load on the existing instances increases, a new instance can be added to the pool on the fly and the load is shared accordingly.
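A minimal round-robin sketch of this idea, assuming an in-process pool with placeholder node URLs (a real setup would use a load balancer or cluster service):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin sketch of horizontal scaling: requests are spread
// across a pool of identical instances, and new instances can be added
// to the pool at runtime. The node URLs below are placeholders.
public class InstancePool {

    private final List<String> instances = new CopyOnWriteArrayList<>();
    private final AtomicInteger next = new AtomicInteger();

    // Scale out "on the fly" by registering another running instance.
    public void addInstance(String url) {
        instances.add(url);
    }

    // Pick the next instance in round-robin order so load is shared evenly.
    public String pickInstance() {
        return instances.get(Math.abs(next.getAndIncrement() % instances.size()));
    }

    public static void main(String[] args) {
        InstancePool pool = new InstancePool();
        pool.addInstance("http://app-node-1:8080");
        pool.addInstance("http://app-node-2:8080");
        System.out.println("route request to " + pool.pickInstance());
    }
}
```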
Horizontal scalability in the database tier is achieved by partitioning data/tables across nodes. Other application components are scaled horizontally through a programmatic approach or a cluster service.
Breaking up an application’s services into partitions, or shards, is another horizontal scaling technique. The partitions can be distributed so that each logical set of functionality is separate. This can be done along geographic boundaries, or by other criteria such as non-paying versus paying users. The advantage of these schemes is that they provide the service or data store with added capacity.
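A minimal sketch of shard routing, assuming a user id as the shard key and placeholder JDBC URLs; a real deployment would also handle re-sharding and failover:

```java
// Minimal sketch of routing by shard key: the user id (an assumed key)
// is hashed to pick one of N database partitions, so each partition
// owns a disjoint slice of the data.
public class ShardRouter {

    private final String[] shardUrls = {
        "jdbc:postgresql://shard-0/app",   // hypothetical partitions
        "jdbc:postgresql://shard-1/app",
        "jdbc:postgresql://shard-2/app"
    };

    // The same key always maps to the same shard.
    public String shardFor(String userId) {
        int bucket = Math.floorMod(userId.hashCode(), shardUrls.length);
        return shardUrls[bucket];
    }
}
```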
- How does distributed caching help scalability?
A distributed caching system maintains a local cache, also known as a front cache, on each cluster member to speed up access to cached data residing on a remote node and to reduce network bandwidth usage. The cache coherence provided by the distributed caching system ensures that all members of the cluster have a consistent view of the data despite the local caching (a minimal sketch follows the list below).
Java web applications often use distributed caching to cache the following data:
- Results of querying a transactional database
- Results of rendering dynamic web pages
- Results of XSL transformations
- Results of accessing web services
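A minimal sketch of the front-cache idea, assuming a hypothetical RemoteCache interface standing in for the remote node (real products such as Coherence or Hazelcast also handle coherence and invalidation for you):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative two-level cache: a local "front" cache on each cluster
// member backed by a remote distributed cache. RemoteCache is an assumed
// interface, not a real product API.
public class FrontCache<K, V> {

    public interface RemoteCache<K, V> {        // assumed remote store
        V get(K key);
        void put(K key, V value);
    }

    private final Map<K, V> local = new ConcurrentHashMap<>();
    private final RemoteCache<K, V> remote;

    public FrontCache(RemoteCache<K, V> remote) {
        this.remote = remote;
    }

    // Check the local copy first to avoid a network hop; fall back to the
    // remote cache and, failing that, to the loader (e.g. a database query).
    // Assumes the loader never returns null.
    public V get(K key, Function<K, V> loader) {
        return local.computeIfAbsent(key, k -> {
            V value = remote.get(k);
            if (value == null) {
                value = loader.apply(k);
                remote.put(k, value);
            }
            return value;
        });
    }
}
```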