SOA Testing Needed to Retain Performance and Reliability Standards
5th Oct 2016
A look at some of the problems surrounding performance testing in SOA
While it may be impossible to get everyone on the same page concerning an exact definition of SOA (service-oriented architecture), we can all agree that thorough testing of an SOA is still an absolute necessity. As with any other form of software, a rigorous testing procedure helps determine where and when lapses in performance might occur, so that they can be addressed beforehand. This is especially true for software that behaves like infrastructure: testing ensures the system won't break down in the middle of an extremely busy period. After all, what good is an overwhelming number of customers if you are facing the threat of collapse?
First off, let's give credit where it's due, and SOA certainly deserves at least a small amount. It is because of SOA that many businesses have been able to establish systems that provide a steady stream of income and potential customers. SOA has also spread into many industries, from manufacturing to healthcare; it is a versatile approach that can be applied in a near-infinite number of ways.
So what's the problem? As with other forms of computing infrastructure, you can never be 100% sure how an application or component will react to a given set of conditions until you test it. Simply put, an SOA is composed of a series of essentially stand-alone components, and they often interact with one another in unpredictable ways. When testing an SOA, individual components must be tested alone and in varying combinations with other components, and this process needs to be carried out from one end of the infrastructure to the other.
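To make the isolation-versus-combination distinction concrete, here is a minimal sketch in Python. The two services (`InventoryService` and `OrderService`) are invented for illustration; the point is that each passes its own tests in isolation, yet the combined behaviour still needs its own tests.

```python
# Hypothetical services for illustration only -- not from any real SOA.

class InventoryService:
    """Stand-alone component: tracks stock levels."""
    def __init__(self):
        self._stock = {"widget": 5}

    def reserve(self, item, qty):
        if self._stock.get(item, 0) < qty:
            return False
        self._stock[item] -= qty
        return True


class OrderService:
    """Component that depends on another component at runtime."""
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, item, qty):
        # Behaviour here depends on how the two components interact.
        if not self._inventory.reserve(item, qty):
            return "rejected"
        return "accepted"


# 1) Test the component alone.
inv = InventoryService()
assert inv.reserve("widget", 3) is True
assert inv.reserve("widget", 3) is False   # only 2 left

# 2) Test the components in combination.
orders = OrderService(InventoryService())
assert orders.place_order("widget", 5) == "accepted"
assert orders.place_order("widget", 1) == "rejected"  # stock exhausted
```

In a real SOA the "combination" tests would call networked services rather than in-process objects, but the shape of the test matrix is the same: every component alone, then every combination that your infrastructure actually exercises.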
The two principal areas of concern in SOA testing are integration and interoperability: how well modularized components fit together, and how well they perform with one another over time. But who exactly is responsible for carrying out such testing? Does the organization providing the service have to do it? Or is it the job of the business using the SOA to perform its own tests, either first-hand or by contracting the work out to a third party? Because most SOAs are set up and used in individual, creative ways, it may be impossible for a service provider to test accurately for your specific requirements. Service providers can certainly establish baseline interoperability standards for their individual components so that those components work and behave correctly, but they cannot predict how you will ultimately implement them. So you may have to carry out testing yourself, or better yet, hire experienced testing specialists to do the work for you. By taking the time to examine potential critical flaws in your infrastructure, you may be able to avoid costly mistakes and future downtime.
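One way the consumer side can test its own specific requirements, beyond the provider's baseline, is a consumer-driven contract check: verify that a provider's response still carries exactly the fields and types this particular integration relies on. The sketch below is a simplified, hand-rolled version of that idea; the field names and response shape are invented.

```python
# Hypothetical contract: the fields *this* consumer depends on.
REQUIRED_FIELDS = {"order_id": str, "status": str, "total": float}

def satisfies_contract(response: dict) -> bool:
    """Return True if the provider's response has every required
    field with the expected type."""
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in REQUIRED_FIELDS.items()
    )

# A response that matches the consumer's expectations passes...
good = {"order_id": "A-17", "status": "shipped", "total": 9.99}
assert satisfies_contract(good)

# ...while a response missing a field (or with a changed type) fails,
# flagging an interoperability break before it reaches production.
assert not satisfies_contract({"order_id": "A-17", "status": "shipped"})
assert not satisfies_contract({"order_id": "A-17", "status": "shipped",
                               "total": "9.99"})  # wrong type
```

Running checks like this against each provider you consume turns "it may be impossible for the provider to test for your requirements" into something you can verify yourself on every release.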
Security is a major concern for businesses that rely heavily on internet sales, traffic, services, or networking. Imagine for a moment that each of your modularized services or components has its own unique set of security protocols to navigate. Each component must not only remain secure in and of itself, but must also interact with other services, each with its own security protocols; this is a recipe for confusion, and it is very difficult to test as well.
But what about SOA performance, and how do you test it? As in any scenario where multiple pieces of software are interconnected and sharing hardware resources (a common occurrence in cloud computing), there are certain situations to watch for. For one, great care should be given to how processing power and resources are distributed among the various components. The most obvious mistakes to avoid are giving too much to lesser components and too little to those that are more prominent. Arguably, most of the problems in this area stem from an overabundance of software layers. With each new layer, the total pool of power and resources is split (evenly or unevenly), which translates into ever more complex rules for resource distribution and, in turn, performance difficulties. In performance testing, user loads are simulated, providing the data needed to evaluate performance in interfacing, communication, presentation, hardware, and, of course, the individual services themselves.
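The simplest form of that load simulation can be sketched in a few lines: fire a batch of concurrent "user" requests at a service and summarize the latency distribution. Here the service is a sleep-based stand-in; in practice you would point the worker at a real endpoint (or use a dedicated load-testing tool).

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def fake_service():
    """Placeholder for a real service call."""
    time.sleep(0.01)  # stand-in for actual work
    return "ok"

def one_user(_):
    """Simulate one user request and measure its latency."""
    start = time.perf_counter()
    fake_service()
    return time.perf_counter() - start

# Simulate 100 requests from 20 concurrent "users".
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(one_user, range(100)))

p50 = statistics.median(latencies)
p95 = latencies[int(len(latencies) * 0.95)]
print(f"p50={p50 * 1000:.1f}ms  p95={p95 * 1000:.1f}ms")
```

Percentiles (rather than averages) are what reveal the resource-distribution problems described above: a component starved of resources shows up as a long tail at p95/p99 long before the average moves.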
Because software components are often tested in isolation, they can be virtually error-free on their own, but what happens when you combine several of these isolated components into a new structure? What you are often faced with is a series of complications. Conventional testing procedures are not comprehensive enough for SOA; they often lack the ability to test across multiple services against a given set of criteria. This is not to say that conventional procedures could not be used in tandem with, or as part of, another solution; it simply means that, by themselves, they lack the structure to get the job done right. Why not create a testing program, delivered through cloud computing interfaces, that can carry out complex testing procedures (like those required for most SOAs) in a nearly automatic fashion? Such a system would perform a battery of tests by simulating user activity in multiple proposed scenarios, then output its findings so that technicians can further evaluate the system at large. Changes could then be made based on the results.
Want to learn more about the business of Cloud Computing and how you can make a difference? Sign up for the Cloud Computing Foundation Program