A helper class to provide common functionality for working with the Web Audio API. https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API A singleton instance of this class is available as game#audio.

See

Game#audio

Alias

game.audio

Properties

unlock: Promise<void>

A Promise which resolves once the game audio API is unlocked and ready to use.
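
For example, code that should not begin playback until audio is available can await this Promise first (a minimal sketch; the path is illustrative and it assumes the instance playback method described under Methods below is exposed as game.audio.play):

    // Wait until a user gesture has unlocked the audio API before playing anything.
    await game.audio.unlock;
    await game.audio.play("sounds/ambience.ogg");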

sounds: Map<string, WeakRef<Sound>> = ...

The set of singleton Sound instances which are shared across multiple uses of the same sound path.

playing: Map<number, Sound> = ...

A map of the Sound objects which are currently playing.
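
Because this is an ordinary Map, currently playing sounds can be iterated directly (a sketch which assumes the Sound class exposes a stop() method):

    // Stop every sound that is currently playing.
    for ( const sound of game.audio.playing.values() ) sound.stop();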

pending: Function[] = []

A user gesture must be registered before audio can be played. This Array contains the Sound instances which are requested for playback prior to a gesture. Once a gesture is observed, we begin playing all elements of this Array.

See

Sound

locked: boolean = true

A flag for whether audio playback is currently locked by awaiting a user gesture.

music: AudioContext

A singleton audio context used for playback of music.

environment: AudioContext

A singleton audio context used for playback of environmental audio.

interface: AudioContext

A singleton audio context used for playback of interface sounds and effects.

buffers: AudioBufferCache = ...

A singleton cache used for audio buffers.

#analyserInterval: number

Interval ID as returned by setInterval for analysing the volume of streams. When set to 0, no timer is set.

#analyserStreams: Record<string, {
    stream: MediaStream;
    analyser: AnalyserNode;
    interval: number;
    callback: Function;
}> = {}

Map of all streams that we listen to for determining the decibel levels. Used for analyzing audio levels of each stream.

Type declaration

  • stream: MediaStream
  • analyser: AnalyserNode
  • interval: number
  • callback: Function
#fftArray: Float32Array = null

Fast Fourier Transform Array. Used for analysing the decibel level of streams. The array is allocated only once then filled by the analyser repeatedly. We only generate it when we need to listen to a stream's level, so we initialize it to null.
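
In Web Audio API terms, the reuse pattern looks roughly like this (an illustrative sketch, not the class's private implementation):

    // One analyser per stream; allocate the FFT array once, sized to the bin count.
    const context = new AudioContext();
    const analyser = context.createAnalyser();
    const fftArray = new Float32Array(analyser.frequencyBinCount);

    // Each analysis tick refills the same array instead of allocating a new one.
    analyser.getFloatFrequencyData(fftArray);
    const maxDecibels = Math.max(...fftArray);  // loudest frequency bin, in decibels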

levelAnalyserNativeInterval: number = 50

The native interval at which the AudioHelper analyses audio levels from streams. Any interval passed to startLevelReports() must be a multiple of this value.

THRESHOLD_CACHE_SIZE_BYTES: number = ...

The cache size threshold after which audio buffers will be expired from the cache to make more room. 1 gigabyte, by default.

#analyzerContext: AudioContext

AudioContext singleton used for analysing audio levels of each stream. Only created if necessary to listen to audio streams.

Accessors

  • get context(): AudioContext
  • For backwards compatibility, AudioHelper#context refers to the context used for music playback.

    Returns AudioContext

Methods

  • Create a Sound instance for a given audio source URL

    Parameters

    Returns Sound

  • Play a single Sound by providing its source.

    Parameters

    • src: string

      The file path to the audio source being played

    • Optional options: {
          context: AudioContext;
      } = {}

      Additional options which configure playback

      • context: AudioContext

        A specific AudioContext within which to play

    Returns Promise<Sound>

    The created Sound which is now playing
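
    Example: Play a one-off sound in the interface context rather than the default music context (a sketch; the path is illustrative and it assumes this method is exposed on the singleton as game.audio.play):

    const sound = await game.audio.play("sounds/notify.ogg", {context: game.audio.interface});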

  • Register an event listener to await the first mousemove gesture and begin playback once observed.

    Returns Promise<void>

    A Promise which resolves once the audio context is unlocked

  • Request that other connected clients begin preloading a certain sound path.

    Parameters

    • src: string

      The source file path requested for preload

    Returns Promise<Sound>

    A Promise which resolves once the preload is complete

  • Returns a singleton AudioContext if one can be created. An audio context may not be available due to limited resources or browser compatibility, in which case null will be returned.

    Returns AudioContext

    A singleton AudioContext or null if one is not available

  • Registers a stream for periodic reports of audio levels. Once added, the callback will be called with the maximum decibel level of the audio tracks in that stream since the last time the event was fired. The interval needs to be a multiple of AudioHelper.levelAnalyserNativeInterval, which defaults to 50ms.

    Parameters

    • id: string

      An id to assign to this report. Can be used to stop reports

    • stream: MediaStream

      The MediaStream instance to report activity on.

    • callback: Function

      The callback function to call with the decibel level. callback(dbLevel)

    • Optional interval: number = 50

      The interval at which to produce reports.

    • Optional smoothing: number = 0.1

      The smoothingTimeConstant to set on the audio analyser.

    Returns boolean

    Returns whether listening to the stream was successful

  • Stop sending audio level reports. This stops listening to a stream and stops sending reports. If we aren't listening to any more streams, the global analyser timer is cancelled.

    Parameters

    • id: string

      The id of the report as passed to startLevelReports.

    Returns void
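
    Example: Start and later stop level reports for a microphone stream (a sketch; the stop method is assumed to be named stopLevelReports):

    const stream = await navigator.mediaDevices.getUserMedia({audio: true});
    const listening = game.audio.startLevelReports("mic", stream,
      db => console.log(`Peak level: ${db} dB`), 100);

    // ... later, once reports are no longer needed.
    if ( listening ) game.audio.stopLevelReports("mic");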

  • Log a debugging message if the audio debugging flag is enabled.

    Parameters

    • message: string

      The message to log

    Returns void

  • Ensures the global analyser timer is started

    Only one timer is created, running every 50ms, and only if it is needed. This avoids having multiple timers running when analysing several streams at the same time. The performance benefit has not been measured, but limiting the number of concurrent timers is considered good practice, and creating too many JavaScript timers risks timer congestion.

    Returns void

  • Cancel the global analyser timer. If the timer is running and has become unnecessary, stop it.

    Returns void
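
    Example: Together, the ensure and cancel methods amount to lazily creating a single shared setInterval and tearing it down once no streams remain (an illustrative sketch, not the class's private implementation):

    let analyserInterval = 0;  // 0 means no timer is currently running

    function ensureAnalyserTimer(tick) {
      if ( !analyserInterval ) analyserInterval = setInterval(tick, 50);
    }

    function cancelAnalyserTimer() {
      if ( !analyserInterval ) return;
      clearInterval(analyserInterval);
      analyserInterval = 0;
    }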

  • Capture audio level for all speakers and emit a webrtcVolumes custom event with all the volume levels detected since the last emit. The event's detail is in the form of {userId: decibelLevel}

    Returns void

  • Private

    Handle the first observed user gesture

    Parameters

    • event: Event

      The mouse-move event which enables playback

    • resolve: Function

      The Promise resolution function

    Returns any

  • Test whether a source file has a supported audio extension type

    Parameters

    • src: string

      A requested audio source path

    Returns boolean

    Does the filename end with a valid audio extension?
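
    Example: A minimal sketch of such a test (the extension list and function name are illustrative, not the definitive set of supported formats):

    const AUDIO_EXTENSIONS = ["mp3", "ogg", "wav", "webm", "flac"];
    function hasAudioExtension(src) {
      const ext = src.split(".").pop().toLowerCase();
      return AUDIO_EXTENSIONS.includes(ext);
    }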

  • Given an input file path, determine a default name for the sound based on the filename

    Parameters

    • src: string

      An input file path

    Returns string

    A default sound name for the path
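
    Example: A sketch of deriving a readable name from a path (illustrative only; the actual method may format the result differently):

    function getDefaultSoundName(src) {
      const file = decodeURIComponent(src.split("/").pop());   // e.g. "thunder-strike.ogg"
      return file.split(".").shift().replace(/[-_.]/g, " ");   // "thunder strike"
    }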

  • Register client-level settings for global volume controls.

    Returns void

  • Open socket listeners which transact audio playback data.

    Parameters

    • socket: any

    Returns void

  • Play a one-off sound effect which is not part of a Playlist

    Parameters

    • data: {
          src: string;
          channel: string;
          volume: number;
          autoplay: boolean;
          loop: boolean;
      }

      An object configuring the audio data to play

      • src: string

        The audio source file path, either a public URL or a local path relative to the public directory

      • channel: string

        An audio channel in CONST.AUDIO_CHANNELS where the sound should play

      • volume: number

        The volume level at which to play the audio, between 0 and 1.

      • autoplay: boolean

        Begin playback of the audio effect immediately once it is loaded.

      • loop: boolean

        Loop the audio effect and continue playing it until it is manually stopped.

    • socketOptions: boolean | object

      Options which only apply when emitting playback over websocket. As a boolean, emits (true) or does not emit (false) playback to all other clients. As an object, can configure which recipients should receive the event.

    Returns Sound

    A Sound instance which controls audio playback.

    Example: Play the sound of a locked door for all players

    AudioHelper.play({src: "sounds/lock.wav", volume: 0.8, loop: false}, true);
    
  • Begin loading the sound for a provided source URL.

    Parameters

    • src: string

      The audio source path to preload

    Returns Promise<Sound>

    The created and loaded Sound ready for playback
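
    Example: Preload a sound ahead of time so later playback starts without a loading delay (a sketch; the path is illustrative and it assumes the method is exposed as game.audio.preload):

    const sound = await game.audio.preload("sounds/door-open.ogg");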

  • Returns the volume value based on a range input volume control's position. This uses an exponential approximation of the logarithmic nature of audio level perception.

    Parameters

    • value: string | number

      Value between [0, 1] of the range input

    • Optional order: number = 1.5

      The exponent of the curve

    Returns number

  • Counterpart to inputToVolume(). Returns the input range value based on a volume level.

    Parameters

    • volume: number

      Value between [0, 1] of the volume level

    • Optional order: number = 1.5

      The exponent of the curve

    Returns number
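
    Example: With the default exponent of 1.5, the conversion and its inverse behave roughly as follows (a sketch of the described curve; it assumes both helpers are exposed statically like AudioHelper.play above, with the counterpart named volumeToInput):

    const volume = AudioHelper.inputToVolume(0.5);      // ≈ 0.5 ** 1.5 ≈ 0.354
    const slider = AudioHelper.volumeToInput(volume);   // ≈ 0.354 ** (1 / 1.5) ≈ 0.5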

  • Handle changes to the global music volume slider.

    Parameters

    • volume: number

    Returns void

  • Handle changes to the global environment volume slider.

    Parameters

    • volume: number

    Returns void

  • Handle changes to the global interface volume slider.

    Parameters

    • volume: number

    Returns void

  • Create an AudioContext with an attached GainNode for master volume control.

    Parameters

    • volumeSetting: any

    Returns AudioContext
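
    Example: A minimal Web Audio sketch of this pattern (illustrative only; the actual method wires the GainNode gain to the stored volume setting):

    const context = new AudioContext();
    const gainNode = context.createGain();
    gainNode.gain.value = 0.8;                // master volume level
    gainNode.connect(context.destination);    // every source routes through this node
    // Sources created for this context connect to gainNode rather than context.destination.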