Open Sound System
OSS 4.x Programmer's Guide


Audio fundamentals

In short, computer digital audio means converting audio signals into a series of numbers and storing them in a computer for later playback.

A typical audio card (or the audio subsystem of a sound card) consists of one or more of the following components:

  - A digital to analog converter (DAC) used for playback.
  - An analog to digital converter (ADC) used for recording.
  - A mixer that controls recording and playback levels and selects the recording source.
  - A MIDI interface and/or synthesizer.

Even though the mixer functionality is listed above, it doesn't actually belong to the audio programming chapter. Mixer programming is explained in the "OSS Mixer Programming" chapter. However, the most commonly used mixer functions have now been included directly in the audio programming API, so there is no need to access the mixer just to select the recording source or to alter the recording or playback level.

The MIDI functionality is completely independent of audio. It is covered in its own chapter ("OSS MIDI Programming").

Audio fundamentals

OSS 4.0 differs from earlier OSS versions (as well as the freeware clone versions based on them) in that the application no longer needs to worry about the characteristics of the device. Instead it simply tells OSS what kind of audio stream it wants to record or play. OSS then takes care of the rest and ensures that the device handles the stream correctly.

There are three key parameters of a digital audio stream, and they are pretty much everything the application needs to set: the number of channels, the sample format and the sampling rate.

With OSS you can use the SNDCTL_DSP_CHANNELS, SNDCTL_DSP_SETFMT and SNDCTL_DSP_SPEED ioctl calls to set these parameters.
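
For example, a playback stream could be set up roughly as shown in the following sketch. The device name and parameter values are only placeholders, and a real program should check the values returned by OSS because the driver may adjust them.

    #include <sys/soundcard.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Open a playback device and set channels, sample format and rate. */
    int open_audio_output(const char *devname)
    {
        int fd, channels = 2, format = AFMT_S16_NE, speed = 48000;

        if ((fd = open(devname, O_WRONLY, 0)) == -1) {
            perror(devname);
            exit(1);
        }

        if (ioctl(fd, SNDCTL_DSP_CHANNELS, &channels) == -1)  /* Stereo */
            perror("SNDCTL_DSP_CHANNELS");
        if (ioctl(fd, SNDCTL_DSP_SETFMT, &format) == -1)      /* 16 bit, native endian */
            perror("SNDCTL_DSP_SETFMT");
        if (ioctl(fd, SNDCTL_DSP_SPEED, &speed) == -1)        /* 48 kHz */
            perror("SNDCTL_DSP_SPEED");

        /* OSS may have changed the requested values; a real program
           should verify channels, format and speed at this point. */
        return fd;
    }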

Which audio device to use

The audio devices in OSS are named /dev/dsp0, /dev/dsp1, ..., /dev/dsp63. However, typical systems do not have that many devices available. The easiest way to find out which devices are available is to use the ossinfo command. For example, the ossinfo -a command produces the following output on one system:


Audio devices (/dev/dsp*)
0: M Audio Delta 1010LT out1/2 (audio port 0 of card 0)
1: M Audio Delta 1010LT out3/4 (audio port 2 of card 0)
2: M Audio Delta 1010LT out5/6 (audio port 4 of card 0)
3: M Audio Delta 1010LT out7/8 (audio port 6 of card 0)
4: M Audio Delta 1010LT S/PDIF out (audio port 8 of card 0)
5: M Audio Delta 1010LT in1/2 (audio port 10 of card 0)
6: M Audio Delta 1010LT in3/4 (audio port 12 of card 0)
7: M Audio Delta 1010LT in5/6 (audio port 14 of card 0)
8: M Audio Delta 1010LT in7/8 (audio port 16 of card 0)
9: M Audio Delta 1010LT S/PDIF in (audio port 18 of card 0)
10: M Audio Delta 1010LT input from mon. mixer (audio port 20 of card 0)
11: M Audio Delta 1010LT (all outputs) (audio port 0 of card 0)
12: M Audio Delta 1010LT (all inputs) (audio port 10 of card 0)
13: M Audio Delta TDIF out1/2 (audio port 0 of card 1)
14: M Audio Delta TDIF out3/4 (audio port 2 of card 1)
15: M Audio Delta TDIF out5/6 (audio port 4 of card 1)
16: M Audio Delta TDIF out7/8 (audio port 6 of card 1)
17: M Audio Delta TDIF S/PDIF out (audio port 8 of card 1)
18: M Audio Delta TDIF in1/2 (audio port 10 of card 1)
19: M Audio Delta TDIF in3/4 (audio port 12 of card 1)
20: M Audio Delta TDIF in5/6 (audio port 14 of card 1)
21: M Audio Delta TDIF in7/8 (audio port 16 of card 1)
22: M Audio Delta TDIF S/PDIF in (audio port 18 of card 1)
23: M Audio Delta TDIF input from mon. mixer (audio port 20 of card 1)
24: M Audio Delta TDIF (all outputs) (audio port 0 of card 1)
25: M Audio Delta TDIF (all inputs) (audio port 10 of card 1)

This system is a typical professional one that has two (or more) sound cards, each with multiple devices. Some of them are inputs and some are outputs. There may also be devices that can be used in both directions. In addition there are some devices (11, 12, 24 and 25) that are redundant with some of the others. For example, device 11 is actually a multi-channel device that handles the stereo channel pairs 0 to 4 together. It's possible to use devices 0 to 4 at the same time, but none of them can be used at the same time as device 11.

There are more such nasty special cases, and for this reason we do not recommend using any AI algorithm to select the devices automatically. Instead, applications should show the available devices and let the user select the ones to be used.

There are three possible device selection strategies:

  1. The application can use the SNDCTL_AUDIOINFO ioctl call to list the audio devices in the same way ossinfo.c does, and let the user pick the device from the list (see the sketch after this list).
  2. The application can ask the user to give the device file name(s) (such as /dev/dsp3) using a command line option or environment variable. A very simple approach is to use an environment variable such as MYAPP_AUDIODEV, MYAPP_AUDIOINPUT or MYAPP_AUDIOOUTPUT (replace MYAPP_ with the name of your application).
  3. Use one of the default devices (see below). However, it's recommended that such devices are only used as the initial values in the application's configuration settings. The default device is common to all applications, while many users want to use a specific device with a given program and the default with the remaining ones.
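
The following sketch outlines strategy 1. The field names come from the oss_audioinfo structure defined in soundcard.h; using /dev/mixer as the file handle for the queries is an assumption that mirrors what ossinfo does.

    #include <sys/soundcard.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <stdio.h>

    /* List the available audio devices, roughly like ossinfo -a does. */
    int main(void)
    {
        int i, mixer_fd;
        oss_sysinfo si;

        if ((mixer_fd = open("/dev/mixer", O_RDWR, 0)) == -1) {
            perror("/dev/mixer");
            return 1;
        }

        if (ioctl(mixer_fd, SNDCTL_SYSINFO, &si) == -1) {
            perror("SNDCTL_SYSINFO");
            return 1;
        }

        for (i = 0; i < si.numaudios; i++) {
            oss_audioinfo ai;

            ai.dev = i;
            if (ioctl(mixer_fd, SNDCTL_AUDIOINFO, &ai) == -1)
                continue;

            printf("%2d: %-40s %s%s (%s)\n", i, ai.name,
                   (ai.caps & PCM_CAP_OUTPUT) ? "out " : "",
                   (ai.caps & PCM_CAP_INPUT) ? "in" : "",
                   ai.devnode);
        }

        return 0;
    }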

The default devices

Developers who have used older OSS versions may have wondered why the /dev/dsp device was not mentioned above. The reason is simple: the purpose and usage of this device have changed slightly since the previous OSS versions.

In early OSS versions (actually it was not called OSS at that time) there was only one audio device, /dev/dsp. Later it became possible to have multiple audio devices in the same system, and /dev/dsp1 was assigned to the second one and so on. Some Linux distributions still follow this naming scheme, which may cause some compatibility problems with them.

Years later the first device was renamed to /dev/dsp0, which is the logical solution. The /dev/dsp device was then created as a symbolic link that points to one of the "real" devices (/dev/dsp0 to /dev/dsp63), depending on the needs of the application.

The above approach is still in use under most operating systems. However, under Linux and Solaris it was possible to implement /dev/dsp as a very special device. In these environments /dev/dsp is no longer a symbolic link that the user has to set. Instead, the ossctl program is now used to control how this device behaves.

There are three different device lists that can be freely modified. If the first device on the list is free, it will be opened when an application tries to open /dev/dsp. However, if that device is busy (used by some other application), the next devices on the list are tried until a free one is found. In this way it's possible for multiple applications to do audio at the same time.

Note that the default device logic doesn't work with applications that use the mmap method for audio playback. The reason is that an mmap application (at least under Linux) must open the device with O_RDWR instead of O_WRONLY. This makes the redirection logic use the wrong list, which causes problems with some devices. Applications using mmap should use the "numbered" devices directly or the default mmap device (see below).

Dedicated default devices

OSS 4.0 creates some additional default devices for a few common application types. These are just symbolic links in the current OS versions, but they may be handled differently in the future.

Use of these dedicated devices is recommended as the default device in the application types listed below. In this way the user can assign a different device to this kind of special application.

These device files don't exist in pre-OSS 4.0 systems. The application can ask the user to create the right symbolic link if necessary. Alternatively it can just silently fall back to /dev/dsp if the device is missing.
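
A possible way to implement the silent fallback is sketched below. The preferred name passed in would be whichever dedicated default device the application normally uses; the function itself is illustrative and not part of the OSS API.

    #include <fcntl.h>
    #include <errno.h>
    #include <stdio.h>

    /* Try the dedicated default device first, then fall back to /dev/dsp. */
    int open_with_fallback(const char *preferred)
    {
        int fd = open(preferred, O_WRONLY, 0);

        if (fd == -1 && errno == ENOENT)     /* Device file missing (pre-4.0 system) */
            fd = open("/dev/dsp", O_WRONLY, 0);
        if (fd == -1)
            perror("audio device");
        return fd;
    }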

Obsolete audio devices

OSS also creates a number of /dev/audio and /dev/dspW device files. They are no longer part of OSS and must not be used by OSS-compatible programs. They are created only because some older applications may still depend on them.

Writing a simple audio program

Writing a simple audio playback or recording application is extremely easy. All you need to do is open the right audio device, set the three fundamental parameters and then just read or write audio data.

The singen.c program is a good example of a simple audio playback program.
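
A minimal sketch in the spirit of singen.c (not the actual program) is shown below. It opens /dev/dsp, sets the three parameters and writes one second of a 440 Hz sine wave. Error checking of the ioctl calls is omitted for brevity, and the program must be linked with -lm.

    #include <sys/soundcard.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        int fd, i, n, written = 0;
        int channels = 1, format = AFMT_S16_NE, speed = 48000;
        short buf[1024];
        double phase = 0.0, step;

        if ((fd = open("/dev/dsp", O_WRONLY, 0)) == -1) {
            perror("/dev/dsp");
            return 1;
        }

        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);   /* Mono */
        ioctl(fd, SNDCTL_DSP_SETFMT, &format);       /* 16 bit, native endian */
        ioctl(fd, SNDCTL_DSP_SPEED, &speed);         /* 48 kHz */

        step = 2.0 * M_PI * 440.0 / speed;           /* Phase increment per sample */
        n = sizeof(buf) / sizeof(buf[0]);

        while (written < speed) {                    /* One second of audio */
            for (i = 0; i < n; i++) {
                buf[i] = (short)(0.5 * 32767.0 * sin(phase));
                phase += step;
            }
            if (write(fd, buf, sizeof(buf)) == -1) {
                perror("write");
                return 1;
            }
            written += n;
        }

        close(fd);
        return 0;
    }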

Using OSS for more challenging purposes is explained elsewhere in this manual, for example in the "Some common types of audio programs" section.



Copyright (C) 4Front Technologies, 2007. All rights reserved.