org.scalajs.dom.raw

OfflineAudioContext

class OfflineAudioContext extends AudioContext

The OfflineAudioContext interface is an AudioContext interface representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer.

It is important to note that, whereas you can create a new AudioContext using the new AudioContext() constructor with no arguments, the new OfflineAudioContext() constructor requires three arguments:

Annotations
@RawJSType() @native() @JSGlobal()
Example:
  1. new OfflineAudioContext(numOfChannels, length, sampleRate)

    This works in exactly the same way as when you create a new AudioBuffer with the AudioContext.createBuffer method. For more detail, read Audio buffers: frames, samples and channels from our Basic concepts guide.

Linear Supertypes
AudioContext, EventTarget, Object, Any, AnyRef, Any

Instance Constructors

  1. new OfflineAudioContext(numOfChannels: Int, length: Int, sampleRate: Int)

    numOfChannels

    An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.

    length

    An integer representing the size of the buffer in sample-frames.

    sampleRate

    The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000, with 44100 being the most commonly used.
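    The three constructor arguments map directly onto an offline render: channel count, total length in sample-frames, and sample rate. A minimal Scala.js sketch (the concrete values are illustrative):

    ```scala
    import org.scalajs.dom.raw.OfflineAudioContext

    // Two seconds of stereo audio at 44.1 kHz, rendered entirely offline.
    // The second argument is a length in sample-frames: seconds * sampleRate.
    val sampleRate = 44100
    val seconds    = 2

    val offlineCtx = new OfflineAudioContext(
      2,                    // numOfChannels: stereo
      seconds * sampleRate, // length: 88200 sample-frames
      sampleRate            // sampleRate: 44100 Hz
    )
    ```

    Nothing is heard while this context renders; the result is delivered as an AudioBuffer by startRendering().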

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def addEventListener[T <: Event](type: String, listener: Function1[T, _], useCapture: Boolean = js.native): Unit

    The EventTarget.addEventListener() method registers the specified listener on the EventTarget it's called on. The event target may be an Element in a document, the Document itself, a Window, or any other object that supports events (such as XMLHttpRequest).

    MDN

    Definition Classes
    EventTarget
  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. def close(): scala.scalajs.js.Promise[Unit]

    Closes the audio context, releasing any system audio resources that it uses.

    Definition Classes
    AudioContext
  10. def createAnalyser(): AnalyserNode

    Creates an AnalyserNode, which can be used to expose audio time and frequency data and for example to create data visualisations.

    Definition Classes
    AudioContext
  11. def createBiquadFilter(): BiquadFilterNode

    Creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter types: high-pass, low-pass, band-pass, etc.

    Definition Classes
    AudioContext
  12. def createBuffer(numOfChannels: Int, length: Int, sampleRate: Int): AudioBuffer

    Creates a new, empty AudioBuffer object, which can then be populated by data and played via an AudioBufferSourceNode.

    numOfChannels

    An integer representing the number of channels this buffer should have. Implementations must support a minimum of 32 channels.

    length

    An integer representing the size of the buffer in sample-frames.

    sampleRate

    The sample-rate of the linear audio data in sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.

    Definition Classes
    AudioContext
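    For instance, a one-second mono buffer can be created, filled with generated samples, and played back through an AudioBufferSourceNode. A hedged Scala.js sketch (the white-noise fill is just one way to populate the channel data):

    ```scala
    import org.scalajs.dom.raw.AudioContext

    // A one-second mono buffer at 44.1 kHz: 1 channel, 44100 sample-frames.
    val ctx    = new AudioContext()
    val buffer = ctx.createBuffer(1, 44100, 44100)

    // Populate channel 0 with white noise in the nominal [-1, 1) range.
    val data = buffer.getChannelData(0)
    for (i <- 0 until data.length)
      data(i) = scala.util.Random.nextFloat() * 2f - 1f

    // Play it back through an AudioBufferSourceNode.
    val source = ctx.createBufferSource()
    source.buffer = buffer
    source.connect(ctx.destination)
    source.start(0)
    ```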
  13. def createBufferSource(): AudioBufferSourceNode

    Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by AudioContext.decodeAudioData when it successfully decodes an audio track.

    Definition Classes
    AudioContext
  14. def createChannelMerger(numberOfInputs: Int = 6): ChannelMergerNode

    Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio stream.

    numberOfInputs

    The number of channels in the input audio streams, which the output stream will contain; the default is 6 if this parameter is not specified.

    Definition Classes
    AudioContext
  15. def createChannelSplitter(numberOfOutputs: Int = 6): ChannelSplitterNode

    Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.

    numberOfOutputs

    The number of channels in the input audio stream that you want to output separately; the default is 6 if this parameter is not specified.

    Definition Classes
    AudioContext
  16. def createConvolver(): ConvolverNode

    Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a reverberation effect.

    Definition Classes
    AudioContext
  17. def createDelay(maxDelayTime: Int): DelayNode

    Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also useful to create feedback loops in a Web Audio API graph.

    maxDelayTime

    The maximum amount of time, in seconds, that the audio signal can be delayed by. The default value is 0.

    Definition Classes
    AudioContext
  18. def createDynamicsCompressor(): DynamicsCompressorNode

    Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal.

    Definition Classes
    AudioContext
  19. def createGain(): GainNode

    Creates a GainNode, which can be used to control the overall volume of the audio graph.

    Definition Classes
    AudioContext
  20. def createMediaElementSource(myMediaElement: HTMLMediaElement): MediaElementAudioSourceNode

    Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.

    myMediaElement

    An HTMLMediaElement object that you want to feed into an audio processing graph to manipulate.

    Definition Classes
    AudioContext
  21. def createMediaStreamDestination(): MediaStreamAudioDestinationNode

    Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.

    Definition Classes
    AudioContext
  22. def createMediaStreamSource(stream: MediaStream): MediaStreamAudioSourceNode

    Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.

    stream

    A MediaStream object that you want to feed into an audio processing graph to manipulate.

    Definition Classes
    AudioContext
  23. def createOscillator(): OscillatorNode

    Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone.

    Definition Classes
    AudioContext
  24. def createPanner(): PannerNode

    Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space.

    Definition Classes
    AudioContext
  25. def createPeriodicWave(real: Float32Array, imag: Float32Array): PeriodicWave

    Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an OscillatorNode.

    Definition Classes
    AudioContext
  26. def createStereoPanner(): StereoPannerNode

    Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source.

    Definition Classes
    AudioContext
  27. def createWaveShaper(): WaveShaperNode

    Creates a WaveShaperNode, which is used to implement non-linear distortion effects.

    Definition Classes
    AudioContext
  28. def currentTime: Double

    Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0 and cannot be stopped, paused or reset.

    Definition Classes
    AudioContext
  29. def decodeAudioData(audioData: ArrayBuffer, successCallback: Function1[AudioBuffer, _] = js.native, errorCallback: Function0[_] = js.native): scala.scalajs.js.Promise[AudioBuffer]

    Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only works on complete files, not fragments of audio files.

    audioData

    An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer.

    successCallback

    A callback function to be invoked when the decoding successfully finishes. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an AudioBufferSourceNode, from which it can be played and manipulated how you want.

    errorCallback

    An optional error callback, to be invoked if an error occurs when the audio data is being decoded.

    Definition Classes
    AudioContext
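    Put together, a typical load-and-decode flow looks like the sketch below (Scala.js; the URL is a placeholder and the use of the returned js.Promise rather than the callbacks is an illustrative choice):

    ```scala
    import org.scalajs.dom
    import org.scalajs.dom.raw.AudioContext
    import scala.scalajs.js.typedarray.ArrayBuffer
    import scala.scalajs.js.Thenable.Implicits._
    import scala.concurrent.ExecutionContext.Implicits.global

    val ctx = new AudioContext()

    // Fetch a complete audio file as an ArrayBuffer, then decode and play it.
    // "/sounds/clip.ogg" is a placeholder URL for illustration.
    def playClip(url: String): Unit = {
      val xhr = new dom.XMLHttpRequest()
      xhr.open("GET", url)
      xhr.responseType = "arraybuffer"
      xhr.onload = (_: dom.Event) => {
        val audioData = xhr.response.asInstanceOf[ArrayBuffer]
        // decodeAudioData returns a js.Promise[AudioBuffer]; here it is
        // consumed as a Future via js.Thenable.Implicits.
        ctx.decodeAudioData(audioData).foreach { decoded =>
          val source = ctx.createBufferSource()
          source.buffer = decoded
          source.connect(ctx.destination)
          source.start(0)
        }
      }
      xhr.send()
    }

    playClip("/sounds/clip.ogg")
    ```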
  30. val destination: AudioDestinationNode

    Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought of as the audio-rendering device.

    Definition Classes
    AudioContext
  31. def dispatchEvent(evt: Event): Boolean

    Dispatches an Event at the specified EventTarget, invoking the affected EventListeners in the appropriate order. The normal event processing rules (including the capturing and optional bubbling phase) apply to events dispatched manually with dispatchEvent().

    MDN

    Definition Classes
    EventTarget
  32. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  33. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  34. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  35. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  36. def hasOwnProperty(v: String): Boolean

    Definition Classes
    Object
  37. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  38. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  39. def isPrototypeOf(v: Object): Boolean

    Definition Classes
    Object
  40. val listener: AudioListener

    Returns the AudioListener object, used for 3D spatialization.

    Definition Classes
    AudioContext
  41. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  42. final def notify(): Unit

    Definition Classes
    AnyRef
  43. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  44. def propertyIsEnumerable(v: String): Boolean

    Definition Classes
    Object
  45. def removeEventListener[T <: Event](type: String, listener: Function1[T, _], useCapture: Boolean = js.native): Unit

    Removes the event listener previously registered with EventTarget.addEventListener.

    MDN

    Definition Classes
    EventTarget
  46. def resume(): scala.scalajs.js.Promise[Unit]

    Resumes the progression of time in an audio context that has previously been suspended.

    Definition Classes
    AudioContext
  47. def startRendering(): scala.scalajs.js.Promise[AudioBuffer]

    The promise-based startRendering() method of the OfflineAudioContext interface starts rendering the audio graph, taking into account the current connections and the current scheduled changes.

    When the method is invoked, the rendering starts and a promise is returned. When the rendering is complete, the promise resolves with an AudioBuffer containing the rendered audio.
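    As a sketch of the full offline workflow in Scala.js (the oscillator settings are illustrative): build a small graph against the offline destination, then render it.

    ```scala
    import org.scalajs.dom.raw.OfflineAudioContext
    import scala.scalajs.js.Thenable.Implicits._
    import scala.concurrent.ExecutionContext.Implicits.global

    // One second of stereo output at 44.1 kHz, rendered offline.
    val offline = new OfflineAudioContext(2, 44100, 44100)

    // A 440 Hz tone feeding the offline destination.
    val osc = offline.createOscillator()
    osc.frequency.value = 440
    osc.connect(offline.destination)
    osc.start(0) // scheduled against the offline timeline, starting at t = 0

    // startRendering() resolves with the rendered AudioBuffer.
    offline.startRendering().foreach { rendered =>
      println(s"Rendered ${rendered.duration}s, ${rendered.numberOfChannels} channels")
    }
    ```

    Because the context renders as fast as the engine can, the promise typically resolves far sooner than the one second of audio it produces.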

  48. def state: String

    Returns the current state of the AudioContext.

    Definition Classes
    AudioContext
  49. def suspend(): scala.scalajs.js.Promise[Unit]

    Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.

    Definition Classes
    AudioContext
  50. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  51. def toLocaleString(): String

    Definition Classes
    Object
  52. def toString(): String

    Definition Classes
    AnyRef → Any
  53. def valueOf(): Any

    Definition Classes
    Object
  54. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  55. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  56. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. var oncomplete: Function1[OfflineAudioCompletionEvent, _]

    An EventHandler called when processing is terminated, that is, when the complete event (of type OfflineAudioCompletionEvent) is raised.

    Annotations
    @deprecated
    Deprecated

    (Since version forever) Use the promise version of OfflineAudioContext.startRendering instead.

Inherited from AudioContext

Inherited from EventTarget

Inherited from Object

Inherited from Any

Inherited from AnyRef

Inherited from Any

Ungrouped