Testing Wireless Java Applications
by Qusay H. Mahmoud, November 2

Wireless applications written in the Java programming language (wireless Java applications), like all other types of software, must be tested to ensure functionality and usability under all working conditions. Testing is even more important in the wireless world because working conditions vary far more than they do for most software. For example, wireless Java applications are developed on high-end desktop machines but deployed on handheld wireless devices with very different characteristics. The aim of this article is to help you test your wireless applications. The article:

- Provides an overview of software testing.
- Describes the challenges in testing wireless applications.
- Presents a tutorial on testing wireless applications.
- Furnishes testing checklists for user interface, networking, and other areas.
- Discusses certification programs for applications targeted at the Java 2 Platform, Micro Edition (J2ME).

Overview of Software Testing

Software testing is a systematic process for finding differences between the expected behavior of the system, as specified in the software requirements document, and its observed behavior. In other words, it is an activity for finding errors in the software system. There is no single agreed-upon goal of software testing. One school of thought describes the goal of testing as demonstrating that errors are not present. The ultimate goal, however, is to find errors and fix them, so that users can be confident that they can depend on the software. Errors (also known as bugs or glitches) are generally introduced by the people involved in software development (including analysts, architects, designers, programmers, and the testers themselves). Examples of errors include:

- Interface specification errors: a mismatch between requirements and implementation.
- Algorithmic faults: missing initialization, branching errors, or missing tests for null.
- Mechanical faults: the user manual doesn't match actual conditions or operating procedures.
- Omissions: some of the features described in the requirements documents are not implemented.

Testing is an iterative process and should start at the beginning of the project. Software developers need to get used to the idea of designing software with testing in mind. Some newer software development methodologies, such as Extreme Programming, stress incremental development and testing. User interface design, for example, benefits greatly from rapid prototyping and testing usability with actual users. One way to make testing simple is to design applications with testing in mind.
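The idea of comparing expected behavior against observed behavior can be made concrete with a plain unit test. The sketch below is illustrative only: the class, its validation rule, and the requirement it encodes are assumptions, not taken from the article.

```java
// Hypothetical component under test: a login-name validator. Its expected
// behavior is spelled out by an assumed requirement: a user name is valid
// if it is non-empty and contains no spaces.
public class LoginValidator {

    public static boolean isValidUserName(String name) {
        return name != null && name.length() > 0 && name.indexOf(' ') < 0;
    }

    // A tiny test driver: it calls the component under test and compares
    // observed behavior against expected behavior, reporting differences.
    public static void main(String[] args) {
        check(isValidUserName("alice"), true,  "plain name accepted");
        check(isValidUserName(""),      false, "empty name rejected");
        check(isValidUserName("a b"),   false, "name with space rejected");
    }

    private static void check(boolean observed, boolean expected, String what) {
        System.out.println((observed == expected ? "PASS: " : "FAIL: ") + what);
    }
}
```

Because the validator depends on no device APIs, the same test runs unchanged on a desktop, in an emulator, or on a device.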
Organizing the system in a certain way can make it much easier to test. Another implication is that the system must expose enough functionality, and enough output information, to distinguish among its different functional features. It is now common to describe a system's functional requirements (the features the system must provide) by using the Unified Modeling Language (UML) to create a use case diagram, and then detailing the use cases in a consistent written form. Documenting the various uses of the system in this way simplifies testing by allowing the tester to generate test scenarios from the use cases. The scenarios represent all the paths users are expected to traverse when they use the features the system must provide. Developers distinguish these functional requirements from nonfunctional system requirements: constraints related to performance, configuration, and usability.

Testing Activities

The testing that needs to be performed can be split into two classes: functional (black-box) testing and structural (white-box) testing. In black-box testing, each of the components -- and ultimately the system as a whole -- is treated as a black box, and testers verify that it supports all the features identified (often as use cases) in the requirements documents. Black-box testing activities include:

- Unit (or class) testing: components are tested separately. Because some objects may depend on other objects that are not yet available, you may need to develop test drivers and test stubs. A test driver simulates the part of the system that calls the component under test; a test stub simulates a component called by the tested component.
- Integration testing: objects are integrated into increasingly large and complex subsystems. This is an incremental testing process.
- System testing: the system is tested as a whole.
Testers employ various techniques at this stage, including functional testing (testing actual behavior against documented requirements), performance testing (testing nonfunctional requirements), and acceptance and installation testing (testing against the project agreement).

Black-box testing concerns itself with externally visible behavior and ignores the source code. Structural (white-box) testing, by contrast, is based on the internal structure of the code; its goal is to identify faults in the implementation by exercising all possible paths through the code at least once. Testers check that every branch in the code has a test that exercises it. Fortunately, you don't have to draw flow graphs for your code by hand, as several code coverage tools are readily available.

Challenges of Testing Wireless Applications

The wide variety of Java technology-enabled devices, such as wireless phones and PDAs, means that each device may run a different implementation of the CLDC and MIDP. Varying display sizes add to the complexity of the testing process. In addition, some vendors provide proprietary API extensions. For example, some J2ME vendors may support only the HTTP protocol, which the MIDP 1.0 specification requires, while others also support TCP sockets and UDP datagrams, which are optional. To make your application both portable and easy to test, design it using standardized APIs defined through the Java Community Process (JCP), so that it will run as-is on devices with different J2ME implementations. If you feel you must use vendor-specific extensions, design your application so that it defaults to the standard APIs when it's deployed on a device that doesn't support the extensions.

Testing Wireless Java Applications

The testing activities described above all apply to wireless Java applications: you perform unit or class testing, then integrate components and test them together, and eventually test the whole system. This section provides guidelines for testing wireless applications.

Validating the Implementation
Ensuring that the application does what it's supposed to do is an iterative process that you must go through during the implementation phase of the project. Part of the validation can be done in an emulation environment such as the J2ME Wireless Toolkit, which provides several phone skins and standard input mechanisms. The toolkit's emulation environment does not support all devices and platform extensions, but it allows you to make sure that the application looks appealing and offers a user-friendly interface on a wide range of devices. Once the application has been tested on an emulator, you can move on to the next step and test it on a real device, in a live network.

Usability Testing

In usability testing (or GUI navigation testing), focus on the external interface and the relationships among the screens of the application. As an example, consider an email application that supports entry and validation of a user name and password, enables the user to read, compose, and send messages, and allows maintenance of related settings, using the screens shown in Figure 1, among others. In this example, start the test at the Login window. Enter a user name and a password and press the soft button labeled Login. If the credentials are invalid, the program should display a meaningful message box with an OK button, and dismissing it should return you to the Login screen. Enter a valid user name and password: the main menu should display a SignOut button. Press the SignOut button. Does the application return to the Login screen? Write yourself a note of any question this raises. You need to test the GUI navigation of the entire system, making notes about usability along the way. If, for example, the user must traverse several screens to perform a function that's likely to be very popular, you may wish to consider moving that particular function up the screen layers.
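One way to make such navigation checks repeatable is to model the screen flow as a plain state machine that can be exercised off-device, before the UI is wired to real MIDP screens. The screen names and transitions below are assumptions based on the email example, not actual MIDP code:

```java
// Hypothetical model of the email example's screen navigation. Keeping the
// flow in a plain class (no javax.microedition imports) lets a unit test
// walk every path that a manual GUI walkthrough would cover.
public class ScreenFlow {
    public static final String LOGIN = "Login";
    public static final String MAIN_MENU = "MainMenu";
    public static final String ERROR = "ErrorMessage";

    private String current = LOGIN;

    public String current() { return current; }

    // Pressing Login moves to the main menu on valid credentials,
    // otherwise to a meaningful error message with an OK button.
    public void pressLogin(boolean credentialsValid) {
        if (current.equals(LOGIN)) {
            current = credentialsValid ? MAIN_MENU : ERROR;
        }
    }

    // OK on the error screen returns to Login.
    public void pressOk() {
        if (current.equals(ERROR)) {
            current = LOGIN;
        }
    }

    // SignOut from the main menu returns to Login.
    public void pressSignOut() {
        if (current.equals(MAIN_MENU)) {
            current = LOGIN;
        }
    }

    public static void main(String[] args) {
        ScreenFlow flow = new ScreenFlow();
        flow.pressLogin(false);             // invalid credentials
        System.out.println(flow.current()); // prints "ErrorMessage"
        flow.pressOk();
        flow.pressLogin(true);              // valid credentials
        flow.pressSignOut();
        System.out.println(flow.current()); // prints "Login"
    }
}
```

A test that drives this model answers the "does SignOut return to Login?" question deterministically, leaving the on-device walkthrough to confirm only the visual details.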
Some of the questions you should ask during usability testing include:

- Is the navigation depth (the number of screens the user must go through) appropriate for each particular function?
- Does the application minimize text entry -- painful on a wireless phone -- or should it provide more selection menus?
- Can the screens of all supported devices display the content without truncating it?
- If you expect to deploy the application on foreign devices, does it support international character sets?

The MIDP Style Guide provides helpful hints about user interface design.

Network Performance Testing

The goal of this type of testing is to verify that the application performs well in the hardest of conditions (for example, when the battery is low or the phone is passing through a tunnel). Testing performance in an emulated wireless network is very important. The problem with testing in a live wireless network is that so many factors affect the performance of the network itself that you can't repeat the exact test scenarios. In an emulated network environment, it is easy to record the result of a test and repeat it later, after you have modified the application, to verify that the performance of the application has improved.

Server-Side Testing

It is very likely that your wireless Java applications will communicate with server-side applications. If your application communicates with servers you control, you have a free hand to test both ends of the application. If it communicates with servers beyond your control (such as quotes.
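The repeatability that an emulated network provides can be approximated in unit tests by hiding the connection behind a small interface and injecting a fake with a controllable, simulated delay. This is a sketch under assumed names -- `Fetcher` and `SlowFakeFetcher` are illustrative, not part of MIDP or the toolkit:

```java
// Hypothetical abstraction over a network fetch, so tests can swap in a
// deterministic fake instead of a live, unrepeatable wireless network.
interface Fetcher {
    String fetch(String url);
}

// Test double that returns canned data after a fixed delay, simulating a
// slow link in a way that can be replayed after every code change.
class SlowFakeFetcher implements Fetcher {
    private final String cannedResponse;
    private final long delayMillis;

    SlowFakeFetcher(String cannedResponse, long delayMillis) {
        this.cannedResponse = cannedResponse;
        this.delayMillis = delayMillis;
    }

    public String fetch(String url) {
        try {
            Thread.sleep(delayMillis); // stand-in for wireless latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return cannedResponse;
    }
}

public class NetworkTimingTest {
    public static void main(String[] args) {
        Fetcher fetcher = new SlowFakeFetcher("canned quote data", 200);
        long start = System.currentTimeMillis();
        String body = fetcher.fetch("http://example.com/quote"); // hypothetical URL
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("fetched \"" + body + "\" in about " + elapsed + " ms");
    }
}
```

Running the same fake with different delay settings gives the repeatable before-and-after comparison described above, something a live network cannot guarantee.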