Model behaviour

The technology your PC uses to make sound is usually based on replaying an audio sample. Brian Heywood looks at alternatives

Back in the early days of PC sound, the main player in computer sound synthesis was usually some variant of the Yamaha OPL chipset, which made a noise reminiscent of a wasp trapped in a jam jar. This chip was based on the FM synthesis algorithms originally used by Yamaha for its DX/TX digital music synthesizers. While the quality of the sound on these early cards was pretty poor by today's standards, it was good enough to propel Creative Labs from a small Singapore-based electronics company, formed by Sim Wong Hoo in 1981, into a major global corporate player, one which, incidentally, now owns a large chunk of the world's music synthesis research and development expertise.

Without delving too deeply into synthesizer technology or history, FM synthesis was Yamaha's take on the leap from analog to digital synthesis. This leap was made possible by the application of DSP (Digital Signal Processing) technology, originally developed for military radar systems, to the audio domain. Yamaha's application of John M Chowning's algorithms, which he'd developed at Stanford University in the early 1970s, led to a family of affordable digital music synthesizers that more or less provided the soundtrack to the 1980s. When Yamaha packaged this technology into an OEM chipset, it probably had no idea that it was providing the raw material for an explosion in PC sound card manufacture. First Adlib, then Creative Labs, based their sound cards on this compact chipset and, along with the games authors, created a mass market for high-quality sound hardware.
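Chowning's technique is simpler than its reputation suggests: one sine wave (the modulator) perturbs the phase of another (the carrier), spraying sidebands across the spectrum in a way that grows richer as the modulation index is turned up. The Python sketch below is purely illustrative; the function and parameter names are my own, and a real OPL chip used multiple operators, envelope generators and log-sine lookup tables rather than floating-point maths.

```python
import math

def fm_tone(carrier_hz, modulator_hz, mod_index, sample_rate=44100, duration=0.01):
    """Generate samples of a simple two-operator FM tone.

    The modulator's output is added to the carrier's phase,
    producing sidebands whose spread grows with mod_index.
    Illustrative sketch only, not any chip's actual algorithm.
    """
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate
        modulator = math.sin(2 * math.pi * modulator_hz * t)
        samples.append(math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator))
    return samples

# A 440Hz carrier modulated at 220Hz gives a harmonic, brassy timbre
tone = fm_tone(440.0, 220.0, mod_index=2.0)
```

Because the carrier-to-modulator frequency ratio is a simple integer here, the sidebands fall on harmonics of the carrier; non-integer ratios give the clangorous, bell-like sounds the DX7 was famous for.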

The demand created in this new marketplace drove up standards as the hardware vendors tried to get a quality edge over their competitors. One way they did this was by either developing new technology or buying in expertise from the musical instrument manufacturers: for instance, Creative bought the sampler pioneer E-MU Systems in the early 1990s. Electronic musicians benefitted both by adopting the increasingly sophisticated PC-based sound generators for music production and by taking advantage of the decrease in price caused by the intense competition. Several of the traditional electronic musical instrument manufacturers also piled into the market, including Roland with its RAP-10 and Sound Canvas PC cards, and Yamaha with the SW1000XG. The bottom line was that by the mid-1990s, musicians and producers had access to audio tools that just a decade before were available only to the very rich or those who had the time and expertise to develop their own hardware and software.

So sound card technology moved on to sample-based replay, initially employing simple wavetable methods, with higher-quality systems using either ROM- or RAM-based sample players. The former stores short samples of different instrument sounds in the player's ROM, which are then looped and used like simple oscillators for the sustained portion of the instrument sound. The latter approach is essentially the same technology as found in professional music samplers in a recording studio: in fact, Creative Labs' subsidiary E-MU developed a standard, called SoundFont, which allows the core instrument sounds to be portable between different systems.
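The looping trick at the heart of wavetable replay can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea rather than any particular card's playback engine: the attack portion of the stored sample plays once, then a marked sustain region repeats for as long as the note is held.

```python
def play_looped(sample, loop_start, loop_end, num_output):
    """Replay a short instrument sample, looping its sustain region.

    The attack (everything before loop_start) plays once; the
    region [loop_start, loop_end) then repeats, acting like a
    simple oscillator for the sustained part of the note.
    """
    out = []
    pos = 0
    for _ in range(num_output):
        out.append(sample[pos])
        pos += 1
        if pos >= loop_end:
            pos = loop_start  # jump back to the start of the sustain loop
    return out

# Tiny five-point "sample": the last three points form the sustain loop
held_note = play_looped([0.0, 0.5, 1.0, 0.5, 0.0], loop_start=2, loop_end=5, num_output=9)
```

Real players add pitch-shifting (reading through the table at a variable rate), crossfaded loop points and envelope scaling on top of this basic mechanism, but the memory saving is the same: a few seconds of ROM can sustain a note indefinitely.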

Getting Physical

Given the advances in processing power of DSP chips and the increasing expectations of both musicians and listeners, it isn't surprising that the technology would move on from simply replaying an existing audio sample to something more sophisticated. Again, Stanford University was at the forefront when a way was developed there to simplify the mathematical model of a stringed instrument to the point where current DSPs could perform the calculations required in real time. Snappily entitled Digital Waveguide Synthesis, this concept was developed by Professor (and rock musician) Julius O Smith III from an idea he had on a bus in 1985 and resulted in the first Virtual Modelling synthesizer, the VL1, being launched by Yamaha in 1995. The same approach can also be used to model wind instruments, drums and even the human voice.
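You can get a flavour of the waveguide approach from the Karplus-Strong plucked-string algorithm, which is generally described as the simplest special case of a digital waveguide string. The Python sketch below is illustrative only, with parameter names of my own choosing: a delay line whose length sets the pitch is filled with noise (the "pluck"), and a gentle averaging filter in the feedback loop stands in for the energy the real string loses, so high harmonics die away faster than the fundamental, just as they do on a guitar.

```python
import random

def pluck(frequency, sample_rate=44100, duration=0.5, decay=0.996):
    """Karplus-Strong plucked string, the simplest waveguide special case.

    A delay line initialised with noise circulates through a
    two-point averaging filter; each pass damps the signal,
    with high frequencies smoothed away fastest.
    """
    n = int(sample_rate / frequency)  # delay-line length sets the pitch
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(sample_rate * duration)):
        s = line[i % n]
        # two-point average = gentle low-pass; decay models energy loss
        line[i % n] = decay * 0.5 * (s + line[(i + 1) % n])
        out.append(s)
    return out

note = pluck(440.0, duration=0.2)
```

Full waveguide models, as used in instruments like the VL1, go much further, simulating the bow, reed or lip excitation and the dispersion of the resonating body, but the underlying idea is the same: model the physics of the instrument rather than store a recording of it.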
