Reading the article below transported me straight back to my year 2 "Controls" module, with flashbacks of Laplace transforms and open/closed loops running through my mind. It's an occupational hazard of having studied Aerospace Engineering: you tend to notice more and be more aware of what happens during a flight. You're no longer one of the "meek sheep" the airlines cart from place to place.
What the article really highlights is that, even with the popular advent of UAVs, unmanned airliners plying the skies are not going to be possible anywhere in the near future. Setting aside the technical and safety issues, there is still the psychological hurdle to overcome. Look at the rail industry: it took well over a century before the automated trains of today became widely accepted, and even then only under very strict conditions. The recent crash landing at Heathrow is a prime example of why a pilot trained to rely on his senses and experience when all the flashy gizmos go blank will always be wanted at the helm in the cockpit. If not for his training, the outcome could have been disastrous.
COCKPIT displays plunged into darkness, engines that throttle back during take-off and contradictory airspeed readings are just some of the problems caused in recent years by inexplicable failures in the software that controls aircraft.
So far there are no known cases of such failures alone causing an accident. Speculation that software problems led to the crash landing of the British Airways Boeing 777 at Heathrow airport, London, on 17 January remains unconfirmed. While software was implicated in the Korean Air jumbo jet crash in August 1997 on Guam, which killed 228 people, human error, not software design, was to blame. Software failures remain a risk, though, and with aircraft makers set to increase the proportion of aircraft functions controlled by software, experts are warning that such failures will become more frequent, increasing the chance that one will cause an accident.
In early aircraft, moving parts such as the rudder and wing flaps were linked to controls in the cockpit either by a system of cables and pulleys or by hydraulics. In the 1970s, aircraft makers realised they could get rid of much of this heavy equipment and replace it with lightweight wiring and systems driven by electric motors in the wings and tail. Such "fly-by-wire" (FBW) systems vastly increased fuel efficiency.
FBW had another advantage too: because it uses electrical signals, for the first time it allowed a computer to be placed between the pilot and the moving parts. The computers were programmed to modify the pilots' instructions in certain instances: for example, to stop them moving the rudder too far or too quickly and so damaging the plane. They also allowed the plane's aerodynamics to be finely adjusted during flight in response to wind conditions, further improving fuel economy.
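To make that concrete, here is a minimal sketch, in C, of the kind of command-limiting logic a fly-by-wire computer might apply between the pilot's input and a control surface. The function name, the limits and the units are invented for illustration and are not taken from any real aircraft.

```c
/* Hypothetical illustration only: a minimal sketch of the kind of
   command-limiting a fly-by-wire computer might apply between the
   pilot's input and the rudder actuator. The limits are made-up
   numbers, not real aircraft data. */

#define MAX_RUDDER_DEG      25.0   /* maximum deflection either way     */
#define MAX_RUDDER_RATE_DPS 10.0   /* maximum deflection change per sec */

/* Returns the deflection actually sent to the actuator, given the
   pilot's commanded deflection, the current deflection, and the time
   step in seconds. */
double limit_rudder_command(double commanded_deg,
                            double current_deg,
                            double dt_s)
{
    /* Clamp the absolute deflection to the structural limit. */
    if (commanded_deg >  MAX_RUDDER_DEG) commanded_deg =  MAX_RUDDER_DEG;
    if (commanded_deg < -MAX_RUDDER_DEG) commanded_deg = -MAX_RUDDER_DEG;

    /* Clamp how fast the surface is allowed to move. */
    double max_step = MAX_RUDDER_RATE_DPS * dt_s;
    double step = commanded_deg - current_deg;
    if (step >  max_step) step =  max_step;
    if (step < -max_step) step = -max_step;

    return current_deg + step;
}
```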
But the addition of software led to different problems. Some of these are documented in a report completed last year by the US National Academy of Sciences (NAS). It lists a number of instances in which software bugs caused frightening problems during flight. In one instance in August 2005, a computer in a Boeing 777 presented the pilot with contradictory reports of airspeed. It said the aircraft was going so fast it could be torn apart and at the same time that the plane was flying so slowly it would fail to generate enough lift to stay in the air. The pilots managed to control the aircraft nonetheless, but it was a stark illustration of what can go wrong.
In another instance in 2005, this time in an Airbus 319, the pilots' computerised flight and navigation displays as well as the autopilot, auto-throttle and radio all lost power simultaneously for 2 minutes. Another time, what the NAS calls "faulty logic in the software" meant that when the computer controlling fuel flow failed in an Airbus A340, the back-up systems were not turned on.
"The pilots' displays, autopilot, auto-throttle and radio all lost power for 2 minutes
"
Now Boeing is planning to get rid of the hydraulic wheel brakes on its 787 in favour of lighter electrically actuated ones and to shift from using pneumatic engine starters and wing de-icers to electrical ones. Airbus will also be adopting a "more electric" approach in its forthcoming A350, says the plane's marketing director. "The addition of more electric systems will mean even more computer control," says Martyn Thomas, a systems engineering consultant based in Bath, UK, and a member of the NAS panel that produced the software report last year. "It will mean more wires or shared data lines and so still more possibilities for errors to arise."
But Boeing disagrees. "Flight critical software and systems are isolated from the other systems, so the addition of electric systems doesn't add complexity to the separate fly-by-wire flight control system," says a senior avionics engineer at Boeing's Everett, Washington plant.
The software used to control additional electric systems may not be in the same package as the flight control system, but it still adds to the overall amount of software that needs to be written and verified as safe for flight. And that has independent experts like Thomas worried.
Why do software bugs arise and why can't they be removed? Bugs are sections of code that do something other than what the programmer intended, usually when the code has to deal with circumstances the programmer didn't anticipate. All software is susceptible to bugs, so it must be tested under as many different circumstances as possible. Ideally, bugs are discovered at this stage and removed before the software goes into service. This is very difficult in complex systems like aircraft because the number of possible scenarios - such as different combinations of air densities, engine temperatures and specific aerodynamics - is huge.
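A contrived example makes the point. The C fragment below is purely illustrative (the sensor-voting scheme and values are invented): it behaves correctly in every case the programmer thought to test, but silently produces a meaningless result in the one circumstance nobody anticipated.

```c
#include <stdio.h>

/* Purely illustrative: a made-up example of how a bug can hide in code
   that behaves correctly for every case the programmer anticipated.
   Averaging three airspeed sensors works as long as at least one is
   valid; the untested "all sensors failed" case divides by zero. */
double average_valid_airspeed(const double reading[3], const int valid[3])
{
    double sum = 0.0;
    int count = 0;
    for (int i = 0; i < 3; i++) {
        if (valid[i]) {
            sum += reading[i];
            count++;
        }
    }
    /* Bug: if no sensor is valid, count is 0 and the division produces
       NaN (not-a-number), which then propagates silently into every
       later calculation - a circumstance the tests never exercised. */
    return sum / count;
}

int main(void)
{
    double readings[3]   = {250.0, 252.0, 251.0};
    int    all_ok[3]     = {1, 1, 1};
    int    all_failed[3] = {0, 0, 0};

    printf("normal case:  %.1f knots\n",
           average_valid_airspeed(readings, all_ok));
    printf("failure case: %.1f knots\n",
           average_valid_airspeed(readings, all_failed));
    return 0;
}
```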
To test for bugs, most aircraft manufacturers use a set of guidelines called the DO-178B standard, which was created by the US-based Radio Technical Commission for Aeronautics, a collection of government, industry and academic organisations, and the European Organisation for Civil Aviation Equipment. Recognised by the US Federal Aviation Administration (FAA), the standard rates software on how seriously it would compromise safety were it to fail, and then recommends different levels of testing depending on that rating. The most rigorous "level A" test, reserved for software whose failure would cause a catastrophic event, is called "modified condition/decision coverage" (MCDC), and it places the software in as many situations as possible to see if it crashes or produces anomalous output.
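As a rough illustration of what the level A criterion asks for, consider a single decision built from three conditions. The logic below is invented, not real avionics code; the point is that MCDC requires each condition to be shown to independently flip the outcome while the others are held fixed, which for this decision takes four test vectors rather than all eight combinations.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: a made-up decision with three conditions, used to
   show what level A "modified condition/decision coverage" asks for.
   The function and its conditions are invented, not real avionics logic. */
static bool spoilers_armed(bool on_ground, bool idle_throttle, bool wheels_spun_up)
{
    return on_ground && (idle_throttle || wheels_spun_up);
}

int main(void)
{
    /* MCDC: each condition must be shown to independently change the
       outcome. Four test vectors suffice for this decision: */
    assert(spoilers_armed(true,  true,  false) == true);   /* baseline                     */
    assert(spoilers_armed(false, true,  false) == false);  /* on_ground flips the result     */
    assert(spoilers_armed(true,  false, false) == false);  /* idle_throttle flips the result */
    assert(spoilers_armed(true,  false, true)  == true);   /* wheels_spun_up flips the result */
    return 0;
}
```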
But it isn't clear whether the MCDC test includes enough different conditions to provide any greater protection than the level B tests done on less safety-critical software. So in 2003, UK Ministry of Defence contractor Qinetiq ran both levels of testing on a range of software deployed in military transport aircraft. MCDC testing should have picked out many more flaws than the level B tests, but the Qinetiq team found "no significant difference" between them. "MCDC testing is not removing any significant numbers of bugs," says Thomas. "It highlights the fact that testing is a completely hopeless way of showing that software does not contain errors."
"The criteria currently used to evaluate the dependability of electronic systems for many safety-related uses are way too weak, way insufficient," says Peter Ladkin, a computer scientist specialising in safety engineering at the University of Bielefeld in Germany.
Instead of focusing on testing, Ladkin and Thomas want to see a change in the way safety-critical software is written. Neither Boeing nor Airbus responded to questions about exactly which programming languages their software systems are written in, but according to Les Dorr, spokesman for the FAA, which certifies US commercial software systems, it is a mixture of the languages C, C++, assembler and Ada, which was developed by the Pentagon. Some of those languages, such as C, allow programmers to write vague or ambiguous code, says Thomas, which is the kind of thing that often leads to bugs.
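The sorts of constructs Thomas is referring to are easy to write in C. The contrived fragment below compiles on most compilers, yet each marked line either has undefined behaviour or silently does something the programmer may not have intended; it is purely illustrative and not drawn from any aircraft system.

```c
#include <limits.h>

/* Contrived examples of the kind of C code that compiles yet has no
   single well-defined meaning, or quietly loses information. Intended
   only to illustrate the point about ambiguity. */
int ambiguous_examples(int commanded)
{
    int trim;                       /* never initialised...                */
    int total = commanded + trim;   /* ...so this value is indeterminate   */

    int big = INT_MAX;
    int overflowed = big + 1;       /* signed overflow: undefined behaviour */

    double ratio = 2.9;
    int truncated = ratio;          /* silent narrowing: 2.9 becomes 2      */

    return total + overflowed + truncated;
}
```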
To solve this problem, he suggests using highly specialised computer languages that do not allow ambiguous software specifications to be written, and which mathematically verify software as the programmer is coding. Such languages include the B-Method, pioneered for use on computer-controlled sections of the Paris Metro, and SPARK, a version of Ada. These so-called "strongly-typed" languages and their compiler software have strict controls within them that make it very difficult for programmers to write vague or ambiguous code.
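There is no way to reproduce SPARK's compile-time proofs in a short snippet here, but the C sketch below gives a rough, runtime-only analogue of the precondition/postcondition contracts such languages verify mathematically before the code ever runs. The function, its limits and the contracts are invented for illustration.

```c
#include <assert.h>

/* A rough analogue, in plain C, of the precondition/postcondition
   contracts that languages such as SPARK check by mathematical proof
   before the code ever runs. Here the same contracts are only checked
   at run time with assertions, which is far weaker, but it shows the
   shape of the idea. The function and its limits are invented. */

#define MAX_FLAP_DEG 40

/* Contract: 0 <= requested_deg <= MAX_FLAP_DEG on entry;
             the result never exceeds the requested setting. */
int select_flap_detent(int requested_deg)
{
    assert(requested_deg >= 0 && requested_deg <= MAX_FLAP_DEG); /* precondition  */

    int detent = (requested_deg / 10) * 10;  /* snap to the nearest lower detent */

    assert(detent >= 0 && detent <= requested_deg);              /* postcondition */
    return detent;
}
```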
The NAS report also backs stricter controls on languages. "Safe programming languages... are likely to reduce the cost and difficulty of producing dependable software," it says.
The FAA agrees that an increase in the software control of planes "makes validation and verification of software more challenging" and is working to ensure that validation keeps pace with technical advances. But Thomas says its progress is too slow. "How long are we prepared to go on using tools we know are broken to develop software on which people's lives depend? No other engineering discipline would rely on tools that have dangerous faults."
From issue 2642 of New Scientist magazine, 11 February 2008, pages 28-29