Data rules modern business – from customer care systems to transaction and billing systems to e-commerce and logistics.

The quality of customer service, sales, and the very ability to deliver services often depend on application response time, data access speed, and the generation of online reports from transactional systems.

To meet market demands, business applications must constantly evolve. This usually goes hand in hand with the appearance of performance issues in the systems being developed, and in extreme cases with a radical drop in effectiveness, including an erosion of the performance of database environments.

In a situation where the demands on a business application are rising, and the number of operations performed is simultaneously jumping drastically, we face a dilemma:
How can we deal with the growing server load from data processing operations and the work of many applications?
The typical approach to this problem offered by application creators and by hardware and software providers is:
Buy servers with higher performance parameters, or invest in other types of equipment such as disk arrays.

Typically, better equipment, by increasing CPU speed, improves application performance. But this improvement is linear and does not produce a radical, qualitative change in the system's operational parameters. Along with significant investment in equipment, a company expects a performance increase of 100 or 200%, yet modernising the equipment rarely delivers such an effect over the long term.

Find the source of the problem

Systems fall into two types: transactional (OLTP systems, which execute many small transactions) and analytical (OLAP systems, where reporting processes read large amounts of data from system disks). One of the most common causes of delays in data processing is queries which place a high load on the processor (CPU). This is especially true in all types of billing systems or warehouse systems which operate on millions of items. When the company's internal IT team has already carried out basic optimisation, the company is faced with a choice:

  • Purchase new equipment and new database licences (the costs can be very high)
  • Order optimisation services from a business software optimisation provider (Oracle, Microsoft)

But neither of these solutions gives a guarantee that the problems will be solved or that a radical jump in performance will be achieved.

The universal problem remains: the execution plan the database engine chooses for SQL queries.

Application providers often search for problems ad hoc, that is, looking only from the perspective of the application instead of attempting to optimise the entire system: the application together with the databases it works with. A common problem is a change in the execution plan of SQL queries, which goes unmonitored by the application owners. Typical database administration tools do not provide sufficiently detailed statistics on the load on particular elements of the database, I/O operations, or the operating system. Moreover, to reach the right conclusions about how a database operates, it should be analysed over a long period, and solutions should be based on trends which indicate improvement or deterioration of its performance.
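The effect of such an unnoticed plan change can be shown with a minimal, hypothetical sketch (Python with SQLite rather than DBPLUS tooling; table and column names are invented): the same query switches from a full table scan to an index search once an index exists, and only inspecting the plan reveals it.

```python
import sqlite3

# Minimal sketch: watch the execution plan the engine chooses for one query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE billing (customer_id INTEGER, amount REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes the access path.
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM billing WHERE customer_id = 42"
before = plan(query)   # full table scan, e.g. "SCAN billing"
conn.execute("CREATE INDEX idx_cust ON billing(customer_id)")
after = plan(query)    # e.g. "SEARCH billing USING INDEX idx_cust (customer_id=?)"
print(before)
print(after)
```

An owner who watches only the application would never see this switch; comparing recorded plans over time is what exposes it.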

DBPLUS offers analysis of the operation of databases over the long term, and implements optimisation up to a set level defined together with the client with a guarantee of the quality of the service.

Thanks to dedicated software for monitoring and optimising the queries in a given application that create performance problems, we can develop an optimisation plan for queries and database structure, as well as prepare recommendations for the optimisation of business processes.

This can result in as much as a one-hundredfold acceleration in the operation of business applications without significant modification of code or the involvement of programming departments.

How does it work?

  • Precise measurement of resource usage by particular SQL queries, the amount of CPU occupied, and I/O operations makes it possible to discover where the implemented optimisation is effective and where there is more room for improvement.
  • Most operations related to this optimisation concern changes in the execution plan of SQL queries.
  • With these changes, it is not always necessary to change indexes or add objects to the database to achieve better performance.
  • Although these changes may seem minor, they generate huge results, as they reduce the load on certain objects, reducing, for example, processor usage for data sorting or the amount of data read and sent from disk resources.
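The last point can be illustrated with a hedged sketch (again SQLite via Python, not DBPLUS software, with invented names): adding an index removes an explicit sort step from the plan without touching application code, cutting the CPU the engine spends sorting.

```python
import sqlite3

# Sketch: a sort step in the plan disappears once an index supplies the order.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, amount REAL)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes one plan step.
    return " | ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT order_date, amount FROM orders ORDER BY order_date"
sorted_plan = plan(q)    # includes "USE TEMP B-TREE FOR ORDER BY" (an explicit sort)
conn.execute("CREATE INDEX idx_date ON orders(order_date)")
indexed_plan = plan(q)   # the sort step is gone: rows arrive in index order
print(sorted_plan)
print(indexed_plan)
```

The query text is unchanged; only the database structure was touched, yet an entire sorting stage of the work vanishes.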

CPU overload

Frequently, individual elements of the database use the processor to an excessive degree, and the usage report shows 100% resource utilisation. The typical client response to this problem is to:

  • buy new equipment with higher performance
  • buy new processors for existing servers

However, it often turns out that this significant jump in processor usage is the work of one small element. Optimisation by a system administrator is possible only when they have access to a report showing how computing power has been divided among the various elements of the database engine over a historical perspective.
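The kind of report meant here can be sketched in a few lines (illustrative Python with invented sample figures, not real DBPLUS output): aggregating per-query CPU time across monitoring snapshots typically reveals a single statement dominating the load.

```python
from collections import defaultdict

# Hypothetical monitoring snapshots: (SQL statement, CPU seconds in that interval).
samples = [
    ("SELECT ... FROM invoices ...", 480.0),
    ("UPDATE sessions SET ...",       12.5),
    ("SELECT ... FROM invoices ...", 510.0),
    ("INSERT INTO audit_log ...",      3.1),
]

# Sum CPU time per statement over the whole observed period.
cpu_by_query = defaultdict(float)
for sql_text, cpu_seconds in samples:
    cpu_by_query[sql_text] += cpu_seconds

# Rank by total CPU: one small element usually accounts for most of the load.
ranked = sorted(cpu_by_query.items(), key=lambda kv: kv[1], reverse=True)
top_sql, top_cpu = ranked[0]
share = top_cpu / sum(cpu_by_query.values())
print(f"top consumer: {top_cpu:.1f}s CPU ({share:.0%} of total)")
```

With such a breakdown in hand, tuning the one dominant statement is usually far cheaper than buying processors for the whole server.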

Guaranteed effects

The experience gained by DBPLUS across many environments and platforms shows that most performance problems recur over time, regardless of expenditure on the operating environment in which the application runs. The basis of our optimisation method is collecting as much information as possible on the operation of the database over a specified period. After analysing these data, we can indicate the source of the performance problem, whether it is caused by excessive data read times from disk devices or by improper SQL query syntax. Using DBPLUS Performance Monitor™, we measure these times and show with single-second precision whether the problem lies in the database or in the disk system, and as a result we can identify the queries responsible for the performance problem.

In contrast to the purchase of servers, DBPLUS optimisation guarantees that the defined performance parameters will be achieved.

The measure of our optimisation services is whether the business process is realised in the time indicated by the client. Additionally, we set technical benchmarks together with the client, such as CPU usage or the load on I/O devices such as disks.

A permanent guarantee of performance of databases entrusted to DBPLUS.

This is a service offered to our long-term clients, involving the maintenance of a defined level of performance over a set time. Its essence is the monitoring of database performance, which is used to prepare changes; after acceptance by the client, DBPLUS implements them. The quality of the service is guaranteed by constant comparison of the defined performance indicators with the parameters expected by the client.