Sound

ADPCMCodec
This is a simple ADPCM (adaptive differential pulse code modulation) codec. It is a general audio codec that compresses speech, music, or sound effects equally well, and works at any sampling rate (i.e., it contains no frequency-sensitive filters). It compresses 16-bit sample data down to 5, 4, 3, or 2 bits per sample, with lower fidelity and increased noise at the lowest bit rates. Although it does not deliver state-of-the-art compression, the algorithm is small, simple, and extremely fast, since the encode/decode primitives have been translated into C primitives.
This codec will also encode and decode all Flash .swf file compressed sound formats, both mono and stereo. (Note: stereo Flash compression is not yet implemented, but stereo decompression works.)
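The core idea of ADPCM, a predictor plus a quantizer step size that adapts to the signal, can be sketched in a few lines. The following Python sketch is a simplified 4-bit adaptive-delta codec for illustration only; this class's actual bit layout, frame format, and step-adaptation tables differ:

```python
# Simplified adaptive-delta codec in the spirit of ADPCM.
# Illustrative only: the step bounds and adaptation rule below are
# assumptions, not this codec's actual tables.

def encode(samples):
    """Encode integer samples to 4-bit codes (1 sign bit + 3 magnitude bits)."""
    predicted, step = 0, 16
    codes = []
    for s in samples:
        diff = s - predicted
        sign = 8 if diff < 0 else 0            # high bit holds the sign
        magnitude = min(abs(diff) // step, 7)  # 3-bit quantized delta
        codes.append(sign | magnitude)
        predicted += -magnitude * step if sign else magnitude * step
        # Adapt: large deltas grow the step, small deltas shrink it.
        step = max(1, min(2048, step * 2 if magnitude >= 6 else
                          step // 2 if magnitude <= 1 else step))
    return codes

def decode(codes):
    """Reverse encode by tracking the same predictor and step size."""
    predicted, step = 0, 16
    out = []
    for c in codes:
        magnitude = c & 7
        predicted += -magnitude * step if c & 8 else magnitude * step
        out.append(predicted)
        step = max(1, min(2048, step * 2 if magnitude >= 6 else
                          step // 2 if magnitude <= 1 else step))
    return out
```

Because encoder and decoder adapt the step identically, no step information needs to be transmitted; only the quantized codes are stored.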
bytesPerEncodedFrame
Answer the number of bytes required to hold one frame of compressed sound data.
compressAndDecompress:
Compress and decompress the given sound. Overridden to use same bits per sample for both compressing and decompressing.
decode:bitsPerSample:
decode:sampleCount:bitsPerSample:frameSize:stereo:
decodeFlash:sampleCount:stereo:
decodeFrames:from:at:into:at:
Decode the given number of monophonic frames starting at the given index in the given ByteArray of compressed sound data and storing the decoded samples into the given SoundBuffer starting at the given destination index. Answer a pair containing the number of bytes of compressed data consumed and the number of decompressed samples produced.
encode:bitsPerSample:
encodeFlashLeft:right:bitsPerSample:
encodeFrames:from:at:into:at:
Encode the given number of frames starting at the given index in the given monophonic SoundBuffer and storing the encoded sound data into the given ByteArray starting at the given destination index. Encode only as many complete frames as will fit into the destination. Answer a pair containing the number of samples consumed and the number of bytes of compressed data produced.
encodeLeft:right:bitsPerSample:frameSize:forFlash:
headerBitsForSampleCount:stereoFlag:
Answer the number of extra header bits required for the given number of samples. This will be zero if I am not using frame headers.
indexForDeltaFrom:to:
Answer the best index to use for the difference between the given samples.
initializeForBitsPerSample:samplesPerFrame:
new
newBitsPerSample:
nextBits:
Answer the next n bits of my bit stream as an unsigned integer.
nextBits:put:
Write the next n bits to my bit stream.
privateDecodeMono:
privateDecodeStereo:
privateEncodeMono:
privateEncodeStereo:
not yet implemented
reset
Reset my encoding and decoding state. Optional. This default implementation does nothing.
resetForMono
Reset my encoding and decoding state for mono.
resetForStereo
Reset my encoding and decoding state for stereo.
samplesPerFrame
Answer the number of sound samples per compression frame.
translatedPrimitives
AIFFFileReader
I am a parser for AIFF (audio interchange file format) files. I can read uncompressed 8-bit and 16-bit mono, stereo, or multichannel AIFF files. I read the marker information used by the TransferStation utility to mark the loop points in sounds extracted from commercial sampled-sound CD-ROMs.
bitsPerSample
channelCount
channelData
channelDataOffset
edit
frameCount
gain
isLooped
isStereo
leftSamples
loopEnd
loopLength
markers
pitch
pitchForKey:
Convert my MIDI key number to a pitch and return it.
readChunk:size:
Read an AIFF chunk of the given type. Skip unrecognized chunks. Leave the input stream positioned chunkSize bytes past its position when this method is called.
readCommonChunk:
Read a COMM chunk. All AIFF files have exactly one chunk of this type.
readExtendedFloat
Read and answer an Apple extended-precision 80-bit floating point number from the input stream.
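This 80-bit format (1 sign bit, 15-bit biased exponent, 64-bit mantissa with an explicit integer bit) is how AIFF stores the sampling rate in its COMM chunk. A Python sketch of the decoding under the standard layout; the Smalltalk method's details may differ slightly:

```python
def read_extended_float(b: bytes) -> float:
    """Decode a big-endian Apple 80-bit extended float:
    1 sign bit, 15-bit biased exponent (bias 16383), and a 64-bit
    mantissa whose integer bit is explicit. Infinity/NaN handling
    is omitted from this sketch."""
    sign = b[0] >> 7
    exponent = ((b[0] & 0x7F) << 8) | b[1]
    mantissa = int.from_bytes(b[2:10], 'big')
    # value = mantissa * 2^(exponent - bias - 63)
    value = mantissa * 2.0 ** (exponent - 16383 - 63)
    return -value if sign else value
```

For example, the common 44.1 kHz sampling rate is stored as the ten bytes 400E AC44 0000 0000 0000.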
readFrom:
Read AIFF data from the given binary stream.
readFromFile:
Read the AIFF file of the given name.
readFromFile:mergeIfStereo:skipDataChunk:
Read the AIFF file of the given name. See comment in readFromStream:mergeIfStereo:skipDataChunk:.
readFromStream:mergeIfStereo:skipDataChunk:
Read an AIFF file from the given binary stream. If mergeFlag is true and the file contains stereo data, then the left and right channels will be mixed together as the samples are read in. If skipDataFlag is true, then the data chunk is skipped; this allows the other chunks of a file to be processed in order to extract format information quickly without reading the data.
readInstrumentChunk:
readMarkerChunk:
readMergedStereoChannelDataFrom:
Read stereophonic channel data from the given stream, mixing the two channels to create a single monophonic channel. Each frame contains two samples.
readMonoChannelDataFrom:
Read monophonic channel data from the given stream. Each frame contains a single sample.
readMultiChannelDataFrom:
Read multi-channel data from the given stream. Each frame contains channelCount samples.
readSamplesChunk:
Read a SSND chunk. All AIFF files with a non-zero frameCount contain exactly one chunk of this type.
readStereoChannelDataFrom:
Read stereophonic channel data from the given stream. Each frame contains two samples.
rightSamples
samplingRate
sound
Answer the sound represented by this AIFFFileReader. This method should be called only after readFrom: has been done.
AbstractScoreEvent
Abstract class for timed events in a MIDI score.
adjustTimeBy:
endTime
Subclasses should override to return the ending time if the event has some duration.
isControlChange
isNoteEvent
isPitchBend
isProgramChange
isTempoEvent
outputOnMidiPort:
Output this event to the given MIDI port. This default implementation does nothing.
time
time:
AbstractSound
An AbstractSound is the abstract superclass of all sounds; it defines the common protocol for playing, mixing, copying, and storing sounds.
Instance Variables
envelopes: <Object>
mSecsSinceStart: <Object>
samplesUntilNextControl: <Object>
scaledVol: <Object>
scaledVolIncr: <Object>
scaledVolLimit: <Object>
envelopes
- collection of envelopes that update my control parameters
mSecsSinceStart
- milliseconds elapsed since playback of this sound started
samplesUntilNextControl
- number of samples to compute before the next control update
scaledVol
- current volume, scaled for fixed-point arithmetic
scaledVolIncr
- per-sample increment used to interpolate volume changes
scaledVolLimit
- target volume at which volume incrementing stops
+
Return the mix of the receiver and the argument sound.
,
Return the concatenation of the receiver and the argument sound.
addEnvelope:
Add the given envelope to my envelopes list.
adjustVolumeTo:overMSecs:
Adjust the volume of this sound to the given volume, a number in the range [0.0..1.0], over the given number of milliseconds. The volume will be changed a little bit on each sample until the desired volume is reached.
asSampledSound
Answer a SampledSound containing my samples. If the receiver is some kind of sampled sound, the resulting SampledSound will have the same original sampling rate as the receiver.
asSound
bachFugue
bachFugueOn:
bachFugueVoice1On:
bachFugueVoice2On:
bachFugueVoice3On:
bachFugueVoice4On:
busySignal:
chromaticPitchesFrom:
chromaticRunFrom:to:on:
chromaticScale
chromaticScaleOn:
computeSamplesForSeconds:
Compute the samples of this sound without outputting them, and return the resulting buffer of samples.
controlRate
Answer the number of control changes per second.
copy
A sound should copy all of the state needed to play itself, allowing two copies of a sound to play at the same time. These semantics require a recursive copy but only down to the level of immutable data. For example, a SampledSound need not copy its sample buffer. Subclasses overriding this method should include a resend to super.
copyEnvelopes
Private! Support for copying. Copy my envelopes.
default
delayedBy:
Return a composite sound consisting of a rest for the given amount of time followed by the receiver.
dial:
dialTone:
doControl
Update the control parameters of this sound using its envelopes, if any.
dur:
duration:
Scale my envelopes to the given duration. Subclasses overriding this method should include a resend to super.
envelopes
Return my collection of envelopes.
fileInSoundLibrary
fileInSoundLibraryNamed:
fileOutSoundLibrary
fileOutSoundLibrary:
hangUpWarning:
hiMajorScale
hiMajorScaleOn:
indexOfBottomOctavePitch:
initSounds
initialVolume:
Set the initial volume of this sound to the given volume, a number in the range [0.0..1.0].
initialize
Subclasses should redefine this method to perform initializations on instance creation
internalizeModulationAndRatio
Overridden by FMSound. This default implementation does nothing.
isPlaying
Return true if the receiver is currently playing
isStereo
Answer true if this sound has distinct left and right channels. (Every sound plays into a stereo sample buffer, but most sounds, which produce exactly the same samples on both channels, are not stereo.)
loudness
Answer the current volume setting for this sound.
loudness:
Initialize my volume envelopes and initial volume. Subclasses overriding this method should include a resend to super.
lowMajorScale
lowMajorScaleOn:
majorChord
majorChordOn:from:
majorPitchesFrom:
majorScale
majorScaleOn:
majorScaleOn:from:
majorScaleOn:from:octaves:
midiKeyForPitch:
millisecondsSinceStart
mixSampleCount:into:startingAt:leftVol:rightVol:
Mix the given number of samples with the samples already in the given buffer starting at the given index. Assume that the buffer size is at least (index + count) - 1. The leftVol and rightVol parameters determine the volume of the sound in each channel, where 0 is silence and ScaleFactor is full volume.
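The volume convention (0 is silence, ScaleFactor is full volume) amounts to fixed-point multiplication. A Python sketch, assuming an interleaved stereo buffer and a power-of-two ScaleFactor of 2^15 (both assumptions; see AbstractSound for the real constant, and note that output clipping is omitted):

```python
SCALE_FACTOR = 1 << 15  # assumed full-volume scale for this sketch

def mix_samples(source, buffer, start, left_vol, right_vol):
    """Mix source samples into an interleaved stereo buffer
    [L0, R0, L1, R1, ...] beginning at frame index `start` (0-based).
    Volumes range from 0 (silence) to SCALE_FACTOR (full volume)."""
    for i, s in enumerate(source):
        frame = 2 * (start + i)
        buffer[frame]     += (s * left_vol)  // SCALE_FACTOR  # left
        buffer[frame + 1] += (s * right_vol) // SCALE_FACTOR  # right
    return buffer
```

Mixing is additive, so several sounds can accumulate into the same buffer before it is handed to the output device.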
nameOrNumberToPitch:
Answer the pitch in cycles/second for the given pitch specification. The specification can be either a numeric pitch or pitch name such as 'c4'.
noteSequenceOn:from:
originalSamplingRate
For sampled sounds, answer the sampling rate used to record the stored samples. For other sounds, this is the same as the playback sampling rate.
pause
Pause this sound. It can be resumed from this point, or reset and resumed to start from the beginning.
pitch:dur:loudness:
pitchForMIDIKey:
pitchForName:
pitchTable
play
Play this sound to the sound output port in real time.
playAndWaitUntilDone
Play this sound to the sound output port and wait until it has finished playing before returning.
playChromaticRunFrom:to:
Play a fast chromatic run between the given pitches. Useful for auditioning a sound.
playSampleCount:into:startingAt:
Mix the next n samples of this sound into the given buffer starting at the given index. Update the receiver's control parameters periodically.
playSilently
Compute the samples of this sound without outputting them. Used for performance analysis.
playSilentlyUntil:
Compute the samples of this sound without outputting them. Used to fast forward to a particular starting time. The start time is given in seconds.
removeAllEnvelopes
Remove all envelopes from my envelopes list.
removeEnvelope:
Remove the given envelope from my envelopes list.
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
resumePlaying
Resume playing this sound from where it last stopped.
samples
Answer a monophonic sample buffer containing my samples. The left and right channels are merged.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
samplingRate
Answer the sampling rate in samples per second.
scaleFactor
scaleTest
setPitch:dur:loudness:
Initialize my envelopes for the given parameters. Subclasses overriding this method should include a resend to super.
soundForMidiKey:dur:loudness:
Answer an initialized sound object (a copy of the receiver) that generates a note for the given MIDI key (in the range 0..127), duration (in seconds), and loudness (in the range 0.0 to 1.0).
soundForPitch:dur:loudness:
Answer an initialized sound object (a copy of the receiver) that generates a note of the given pitch, duration, and loudness. Pitch may be a numeric pitch or a string pitch name such as 'c4'. Duration is in seconds and loudness is in the range 0.0 to 1.0.
soundNamed:
soundNamed:ifAbsent:
soundNamed:put:
soundNames
sounds
Allows simple sounds to behave as, e.g., sequential sounds
stereoBachFugue
stopAfterMSecs:
Terminate this note after the given number of milliseconds. This default implementation does nothing.
stopGracefully
End this note with a graceful decay. If the note has envelopes, determine the decay time from its envelopes.
storeAIFFOnFileNamed:
Store this sound as an AIFF file of the given name.
storeAIFFSamplesOn:
Store this sound as a 16-bit AIFF file at the current SoundPlayer sampling rate. Store both channels if self isStereo is true; otherwise, store the left channel only as a mono sound.
storeExtendedFloat:on:
Store an Apple extended-precision 80-bit floating point number on the given stream.
storeFiledInSound:named:
storeSample:in:at:leftVol:rightVol:
This method is provided for documentation. To gain 10% more speed when running sound generation in Smalltalk, this method is hand-inlined into all sound generation methods that use it.
storeSampleCount:bigEndian:on:
Store my samples on the given stream at the current SoundPlayer sampling rate. If bigFlag is true, then each 16-bit sample is stored most-significant byte first (AIFF files), otherwise it is stored least-significant byte first (WAV files). If self isStereo is true, both channels are stored, creating a stereo file. Otherwise, only the left channel is stored, creating a mono file.
storeSunAudioOnFileNamed:
Store this sound as an uncompressed Sun audio file of the given name.
storeSunAudioSamplesOn:
Store this sound as a 16-bit Sun audio file at the current SoundPlayer sampling rate. Store both channels if self isStereo is true; otherwise, store the left channel only as a mono sound.
storeWAVOnFileNamed:
Store this sound as a 16-bit Windows WAV file of the given name.
storeWAVSamplesOn:
Store this sound as a 16-bit Windows WAV file at the current SoundPlayer sampling rate. Store both channels if self isStereo is true; otherwise, store the left channel only as a mono sound.
testFMInteractively
translatedPrimitives
unloadSampledTimbres
unloadSoundNamed:
unloadedSound
updateFMSounds
updateScorePlayers
updateVolume
Increment the volume envelope of this sound. To avoid clicks, the volume envelope must be interpolated at the sampling rate, rather than just at the control rate like other envelopes. At the control rate, the volume envelope computes the slope and next target volume for the current segment of the envelope (i.e., it sets the rate of change for the volume parameter). When that target volume is reached, incrementing is stopped until a new increment is set.
volumeEnvelopeScaledTo:
Return a collection of values representing my volume envelope scaled by the given point. The scale point's x component is pixels/second and its y component is the number of pixels for full volume.
AmbientEvent
An AmbientEvent is a score event that performs an arbitrary action at its scheduled time by sending its selector to its target with its arguments (for example, to update a morph during score playback).
Instance Variables
arguments: <Object>
morph: <Object>
selector: <Object>
target: <Object>
arguments
- arguments of the message sent when this event occurs
morph
- morph associated with this event, if any
selector
- selector of the message sent when this event occurs
target
- object to which the message is sent
morph
morph:
occurAtTime:inScorePlayer:atIndex:inEventTrack:secsPerTick:
target:selector:arguments:
BaseSoundSystem
This is the normal sound system in Squeak and is registered in SoundService - an AppRegistry - so that a small high-level protocol for playing sounds can be used in a pluggable fashion.
More information is available in the superclass.
beep
There is sound support, so we use the default
sampled sound for a beep.
initialize
Subclasses should redefine this method to perform initializations on instance creation
playSampledSound:rate:
playSoundNamed:
There is sound support, so we play the given sound.
playSoundNamed:ifAbsentReadFrom:
playSoundNamedOrBeep:
There is sound support, so we play the given sound
instead of beeping.
randomBitsFromSoundInput:
Answer a positive integer with the given number of random bits of 'noise' from a sound input source. Typically, one would use a microphone or line input as the sound source, although many sound cards have enough thermal noise that you get random low-order sample bits even with no microphone connected. Only the least significant bit of the samples is used. Since not all sound cards support 16 bits of sample resolution, we use the lowest bit that changes.
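Once samples are in hand, the bit harvesting itself is simple. A hypothetical Python helper (the name and the `bit` parameter are illustrative) that packs one chosen bit from each of n successive samples:

```python
def random_bits_from_samples(samples, n_bits, bit=0):
    """Pack the chosen bit (default: least significant) of successive
    sound samples into an unsigned integer. The `bit` parameter models
    picking 'the lowest bit that changes' on sound cards with fewer
    than 16 real bits of resolution."""
    assert len(samples) >= n_bits
    value = 0
    for s in samples[:n_bits]:
        value = (value << 1) | ((s >> bit) & 1)
    return value
```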
sampledSoundChoices
shutDown
Default is to do nothing.
soundNamed:
unload
CompressedSoundData
Instances of this class hold the data resulting from compressing a sound. Each carries a reference to the codec class that created it, so that it can reconstruct a sound similar to the original in response to the message asSound.
In order to facilitate integration with existing sounds, a CompressedSoundData instance can masquerade as a sound by caching a copy of its original sound and delegating the essential sound-playing protocol to that cached copy. It should probably be made a subclass of AbstractSound to complete the illusion.
asSound
Answer the result of decompressing the receiver.
channels
Answer an array of ByteArrays containing the compressed sound data for each channel.
channels:
codecName
Answer the name of the sound codec used to compress this sound. Typically, this is the name of a class that can be used to decode the sound, but it is possible that the codec has not yet been implemented or is not filed into this image.
codecName:
compressWith:
compressWith:atRate:
doControl
firstSample
Answer the firstSample of the original sound.
firstSample:
gain
Answer the gain of the original sound.
gain:
loopEnd
Answer the index of the last sample of the loop, or nil if the original sound was not looped.
loopEnd:
loopLength
Answer the length of the loop, or nil if the original sound was not looped.
loopLength:
mixSampleCount:into:startingAt:leftVol:rightVol:
perceivedPitch
Answer the perceived pitch of the original sound. By convention, unpitched sounds (like drum hits) are given an arbitrary pitch of 100.0.
perceivedPitch:
reset
This message is the cue to start behaving like a real sound in order to be played.
We do this by caching a decompressed version of this sound.
See also samplesRemaining.
samples
samplesRemaining
This message is the cue that the cached sound may no longer be needed.
We know it is done playing when samplesRemaining=0.
samplingRate
Answer the samplingRate of the original sound.
samplingRate:
soundClassName
Answer the class name of the uncompressed sound.
soundClassName:
ControlChangeEvent
A ControlChangeEvent represents a MIDI control change message in a score.
Instance Variables
channel: <Object>
control: <Object>
value: <Object>
channel
- MIDI channel of the control change
control
- MIDI controller number
value
- new value for the controller
channel
channel:
control
control:
control:value:channel:
isControlChange
outputOnMidiPort:
Output this event to the given MIDI port.
printOn:
Append to the argument, aStream, a sequence of characters that
identifies the receiver.
printOnStream:
value
value:
Envelope
An envelope models a three-stage progression for a musical note: attack, sustain, decay. Envelopes can either return the envelope value at a given time or can update some target object using a client-specified message selector.
The points instance variable holds an array of (time, value) points, where the times are in milliseconds. The points array must contain at least two points. The time coordinate of the first point must be zero and the time coordinates of subsequent points must be in ascending order, although the spacing between them is arbitrary. Envelope values between points are computed by linear interpolation.
The scale slot is initially set so that the peak of the envelope matches some note attribute, such as its loudness. When entering the decay phase, the scale is adjusted so that the decay begins from the envelope's current value. This avoids a potential sharp transient when entering the decay phase.
The loopStartIndex and loopEndIndex slots contain the indices of points in the points array; if they are equal, then the envelope holds a constant value for the sustain phase of the note. Otherwise, envelope values are computed by repeatedly looping between these two points.
The loopEndMSecs slot can be set in advance (as when playing a score) or dynamically (as when responding to interactive inputs from a MIDI keyboard). In the latter case, the value of scale is adjusted to start the decay phase with the current envelope value. Thus, if a note ends before its attack is complete, the decay phase is started immediately (i.e., the attack phase is never completed).
For best results, amplitude envelopes should start and end with zero values. Otherwise, the sharp transient at the beginning or end of the note may cause audible clicks or static. For envelopes on other parameters, this may not be necessary.
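The value lookup this implies, linear interpolation between (time, value) points with zero outside the envelope's range, can be sketched as follows (looping and sustain handling are omitted from this sketch):

```python
def envelope_value_at(points, msecs):
    """Linearly interpolate an envelope given as [(timeMSecs, value), ...]
    with strictly ascending times starting at 0. Returns 0 outside the
    envelope's time range, mirroring computeValueAtMSecs:."""
    if msecs < points[0][0] or msecs > points[-1][0]:
        return 0
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if msecs <= t1:
            return v0 + (v1 - v0) * (msecs - t0) / (t1 - t0)
    return 0
```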
attackTime
Return the time taken by the attack phase.
centerPitch:
Set the center pitch of a pitch-controlling envelope. This default implementation does nothing.
checkParameters
Verify that the point array, loopStartIndex, and loopStopIndex obey the rules.
computeIncrementAt:between:and:scale:
Compute the current and increment values for the given time between the given inflection points.
computeValueAtMSecs:
Return the value of this envelope at the given number of milliseconds from its onset. Return zero for times outside the time range of this envelope.
decayEndIndex
decayTime
Return the time taken by the decay phase.
duration
Return the time of the final point.
duration:
Set the note duration to the given number of seconds.
example
exponentialDecay:
incrementalComputeValueAtMSecs:
Compute the current value, per-step increment, and the time of the next inflection point.
indexOfPointAfterMSecs:startingAt:
Return the index of the first point whose time is greater than mSecs, starting with the given index. Return nil if mSecs is after the last point's time.
interpolate:between:and:
Return the scaled, interpolated value for the given time between the given time points.
loopEndIndex
loopStartIndex
name
Answer a name for the receiver. This is used generically in the title of certain inspectors, such as the referred-to inspector, and specifically by various subsystems. By default, we let the object just print itself out.
points
points:loopStart:loopEnd:
reset
Reset the state for this envelope.
scale
scale:
setPoints:loopStart:loopEnd:
storeOn:
Append to the argument aStream a sequence of characters that is an
expression whose evaluation creates an object similar to the receiver.
sustainEnd:
Set the ending time of the sustain phase of this envelope; the decay phase will start at this point. Typically derived from a note's duration.
target
target:
updateSelector
updateSelector:
updateTargetAt:
Send my updateSelector to the given target object with the value of this envelope at the given number of milliseconds from its onset. Answer true if the value changed.
valueAtMSecs:
Return the value of this envelope at the given number of milliseconds from its onset. Return zero for times outside the time range of this envelope.
volume:
Set the maximum volume of a volume-controlling envelope. This default implementation does nothing.
FFT
This class implements the Fast Fourier Transform roughly as described on page 367
of "Theory and Application of Digital Signal Processing" by Rabiner and Gold.
Each instance caches tables used for transforming a given size (n = 2^nu samples) of data.
It would have been cleaner using complex numbers, but often the data is all real.
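For reference, the transform itself can be stated compactly. This recursive radix-2 sketch in Python computes the same result as a table-driven implementation, just without the cached tables or in-place permutation:

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT.
    len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    # Twiddle factors e^(-2*pi*i*k/n) applied to the odd half.
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [e + t for e, t in zip(even, tw)] + [e - t for e, t in zip(even, tw)]
```

A production implementation precomputes the twiddle factors and the bit-reversal permutation for a fixed n, which is exactly what this class's cached tables are for.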
imagData
initializeHammingWindow:
Initialize the windowing function to the generalized Hamming window. See F. Richard Moore, Elements of Computer Music, p. 100. An alpha of 0.54 gives the Hamming window; 0.5 gives the Hanning window.
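The window family referred to here is w(i) = alpha - (1 - alpha) * cos(2*pi*i / (n - 1)). A Python sketch using that common normalization (the class may divide by n rather than n - 1):

```python
import math

def generalized_hamming(n, alpha=0.54):
    """Generalized Hamming window: alpha = 0.54 gives the Hamming
    window, alpha = 0.5 the Hanning window."""
    return [alpha - (1 - alpha) * math.cos(2 * math.pi * i / (n - 1))
            for i in range(n)]
```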
initializeTriangularWindow
Initialize the windowing function to the triangular, or Parzen, window. See F. Richard Moore, Elements of Computer Music, p. 100.
n
new:
nu:
Initialize variables and tables for transforming 2^nu points
permuteData
plot:in:
Throw-away code just to check out a couple of examples
pluginPrepareData
The FFT plugin requires data to be represented in WordArrays or FloatArrays
pluginTest
Display restoreAfter: [(FFT new nu: 12) pluginTest].
pluginTransformData:
Plugin testing -- if the primitive is not implemented
or cannot be found, run the simulation. See also: FFTPlugin
realData
realData:
realData:imagData:
samplesPerCycleForIndex:
Answer the number of samples per cycle corresponding to a power peak at the given index. Answer zero if i = 1, since an index of 1 corresponds to the D.C. component.
scaleData
Scale all elements by 1/n when doing inverse
setSize:
Initialize variables and tables for performing an FFT on the given number of samples. The number of samples must be an integral power of two (e.g. 1024). Prepare data for use with the fast primitive.
test
Display restoreAfter: [(FFT new nu: 8) test]. -- Test on an array of 256 samples
transformDataFrom:startingAt:
Forward transform a block of real data taken from the given indexable collection starting at the given index. Answer a block of values representing the normalized magnitudes of the frequency components.
transformForward:
FMBassoonSound
An FMBassoonSound is an FMSound whose parameters are tuned to imitate a bassoon.
setPitch:dur:loudness:
Select a modulation ratio and modulation envelope scale based on my pitch.
FMClarinetSound
An FMClarinetSound is an FMSound whose parameters are tuned to imitate a clarinet.
setPitch:dur:loudness:
Select a modulation ratio and modulation envelope scale based on my pitch.
FMSound
An FMSound generates tones by frequency modulation (FM) synthesis: it steps through a wave table with an index increment that is itself modulated.
Instance Variables
count: <Object>
initialCount: <Object>
modulation: <Object>
multiplier: <Object>
normalizedModulation: <Object>
scaledIndex: <Object>
scaledIndexIncr: <Object>
scaledOffsetIndex: <Object>
scaledOffsetIndexIncr: <Object>
scaledWaveTableSize: <Object>
waveTable: <Object>
count
- number of samples remaining to be played for the current note
initialCount
- total number of samples for the current note
modulation
- FM modulation index
multiplier
- modulation to carrier frequency ratio
normalizedModulation
- modulation internalized relative to the current pitch and sampling rate
scaledIndex
- current wave table index, scaled to allow fractional increments
scaledIndexIncr
- amount by which scaledIndex advances on each sample
scaledOffsetIndex
- scaled wave table index for the modulating oscillator
scaledOffsetIndexIncr
- amount by which scaledOffsetIndex advances on each sample
scaledWaveTableSize
- size of the wave table, scaled to match the scaled indices
waveTable
- table of waveform samples stepped through to generate the sound
bass1
bassoon1
brass1
brass2
clarinet
clarinet2
default
duration
duration:
Scale my envelopes to the given duration. Subclasses overriding this method should include a resend to super.
flute1
flute2
initialize
Subclasses should redefine this method to perform initializations on instance creation
internalizeModulationAndRatio
Recompute the internal state for the modulation index and frequency ratio relative to the current pitch.
marimba
mellowBrass
mixSampleCount:into:startingAt:leftVol:rightVol:
Play samples from a wave table by stepping a fixed amount through the table on every sample. The table index and increment are scaled to allow fractional increments for greater pitch accuracy.
modulation
Return the FM modulation index.
modulation:
Set the FM modulation index. Typical values range from 0 (no modulation) to 5, although values up to about 10 are sometimes useful.
modulation:multiplier:
For backward compatibility. Needed to read old .fmp files.
modulation:ratio:
Set the modulation index and carrier to modulation frequency ratio for this sound, and compute the internal state that depends on these parameters.
multiplier
oboe1
oboe2
organ1
pitch
pitch:
Warning: Since the modulation and ratio are relative to the current pitch, some internal state must be recomputed when the pitch is changed. However, for efficiency during envelope processing, this computation will not be done until internalizeModulationAndRatio is called.
pluckedElecBass
randomWeird1
randomWeird2
ratio
Return the FM modulation to carrier frequency ratio.
ratio:
Set the FM modulation to carrier frequency ratio.
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
setPitch:dur:loudness:
(FMSound pitch: 'a4' dur: 2.5 loudness: 0.4) play
setWavetable:
(AbstractSound lowMajorScaleOn: (FMSound new setWavetable: AA)) play
sineTable
stopAfterMSecs:
Terminate this note after the given number of milliseconds.
storeOn:
Append to the argument aStream a sequence of characters that is an
expression whose evaluation creates an object similar to the receiver.
FWT
This class implements the Fast Wavelet Transform. It follows Mac Cody's article in Dr. Dobb's Journal, April 1992. See also...
http://www.dfw.net/~mcody/fwt/fwt.html
Notable features of his implementation include...
1. The ability to generate a large family of wavelets (including the Haar (alpha=beta) and Daubechies) from two parameters, alpha and beta, which range between -pi and pi.
2. All data arrays have 5 elements added on to allow for convolution overrun with filters up to 6 in length (the max for this implementation).
3. After a forward transform, the detail coefficients of the decomposition are found in transform at: 2*i, for i = 1, 2, ... nLevels; and the approximation coefficients are in transform at: (2*nLevels-1). These together comprise the complete wavelet transform.
The following changes from Cody's listings should also be noted...
1. The three DotProduct routines have been merged into one.
2. The four routines WaveletDecomposition, DecomposeBranches, WaveletReconstruction, ReconstructBranches have all been merged into transformForward:.
3. All indexing follows the Smalltalk 1-to-N convention, naturally.
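The simplest member of the wavelet family above is the Haar transform (the alpha = beta case). One decomposition level, pairwise scaled sums (approximation) and differences (detail), makes the structure concrete; this Python sketch illustrates the idea and is not the filter-bank code described above:

```python
import math

def haar_step(samples):
    """One level of the Haar wavelet transform: approximation
    coefficients are scaled pairwise sums, detail coefficients are
    scaled pairwise differences. len(samples) must be even."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(samples[0::2], samples[1::2])]
    detail = [(a - b) * s for a, b in zip(samples[0::2], samples[1::2])]
    return approx, detail

def haar_unstep(approx, detail):
    """Invert haar_step exactly."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out
```

Applying haar_step repeatedly to the approximation coefficients yields the multi-level decomposition of detail and approximation bands described above.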
coeffs
Return all coefficients needed to reconstruct the original samples
coeffs:
Initialize this instance from the given coeff array (including header).
convolveAndDec:dataLen:filter:out:
Convolve the input sequence with the filter and decimate by two.
convolveAndInt:dataLen:filter:sumOutput:into:
Insert zeros between each element of the input sequence and
convolve with the filter to interpolate the data.
doWaveDemo
FWT new doWaveDemo
dotpData:endIndex:filter:start:stop:inc:
meanSquareError:
Return the mean-square error between the current sample array and
some other data, presumably to evaluate a compression scheme.
nSamples:nLevels:
Initialize a wavelet transform.
samples
samples:
setAlpha:beta:
Set alpha and beta, compute the wavelet coefficients, and derive hFilter and lFilter
transformForward:
viewPhiAndPsi
(FWT new nSamples: 256 nLevels: 6) viewPhiAndPsi
GSMCodec
A GSMCodec compresses and decompresses sound using the GSM full-rate speech codec; the encoding and decoding work is done by primitives.
Instance Variables
decodeState: <Object>
encodeState: <Object>
decodeState
- opaque state buffer used by the decoding primitive
encodeState
- opaque state buffer used by the encoding primitive
bytesPerEncodedFrame
Answer the number of bytes required to hold one frame of compressed sound data. Answer zero if this codec produces encoded frames of variable size.
decodeFrames:from:at:into:at:
Decode the given number of monophonic frames starting at the given index in the given ByteArray of compressed sound data and storing the decoded samples into the given SoundBuffer starting at the given destination index. Answer a pair containing the number of bytes of compressed data consumed and the number of decompressed samples produced.
encodeFrames:from:at:into:at:
Encode the given number of frames starting at the given index in the given monophonic SoundBuffer and storing the encoded sound data into the given ByteArray starting at the given destination index. Encode only as many complete frames as will fit into the destination. Answer a pair containing the number of samples consumed and the number of bytes of compressed data produced.
new
primDecode:frames:from:at:into:at:
primEncode:frames:from:at:into:at:
primNewState
reset
Reset my encoding/decoding state to prepare to encode or decode a new sound stream.
samplesPerFrame
Answer the number of sound samples per compression frame.
LoopedSampledSound
I represent a sequence of sound samples, often used to record a single note played by a real instrument. I can be pitch-shifted up or down, and can include a looped portion to allow a sound to be sustained indefinitely.
addReleaseEnvelope
Add a simple release envelope to this sound.
beUnlooped
comeFullyUpOnReload:
Convert my sample buffers from ByteArrays into SampleBuffers after raw loading from a DataStream. Answer myself.
computeSampleCountForRelease
Calculate the number of samples before the end of the note after which looping back will be disabled. The units of this value, sampleCountForRelease, are samples at the original sampling rate. When playing a specific note, this value is converted to releaseCount, which is the number of samples to be computed at the current pitch and sampling rate.
copyDownSampledLowPassFiltering:
Answer a copy of the receiver at half its sampling rate. The result consumes half the memory space, but has only half the frequency range of the original. If doFiltering is true, the original sound buffers are low-pass filtered before down-sampling. This is slower, but prevents aliasing of any high-frequency components of the original signal. (While it may be possible to avoid low-pass filtering when down-sampling from 44.1 kHz to 22.05 kHz, it is probably essential when going to lower sampling rates.)
downSampleLowPassFiltering:
Cut my sampling rate in half. Use low-pass filtering (slower) if doFiltering is true.
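The halving step can be illustrated outside Smalltalk. The sketch below (Python, illustrative only; the actual low-pass filter used by Squeak may be of higher order than this simple pairwise average) averages each pair of adjacent samples before discarding every other one:

```python
def down_sample(samples, do_filtering=True):
    """Halve the sampling rate of a sequence of samples.

    With do_filtering true, each output sample is the average of two
    adjacent input samples -- a crude low-pass filter that suppresses
    aliasing of high-frequency components.  Without filtering, every
    other sample is simply dropped."""
    if do_filtering:
        return [(samples[i] + samples[i + 1]) // 2
                for i in range(0, len(samples) - 1, 2)]
    return list(samples[0::2])
```

Note how the unfiltered version keeps only the even-indexed samples, which is faster but lets any content above the new Nyquist frequency alias down into the result.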
duration
Answer the duration of this sound in seconds.
duration:
Scale my envelopes to the given duration. Subclasses overriding this method should include a resend to super.
fftAt:
Answer the Fast Fourier Transform (FFT) of my samples (only the left channel, if stereo) starting at the given index.
fftWindowSize:startingAt:
Answer a Fast Fourier Transform (FFT) of the given number of samples starting at the given index (the left channel only, if stereo). The window size will be rounded up to the nearest power of two greater than the requested size. There must be enough samples past the given starting index to accommodate this window size.
findStartPointAfter:
Answer the index of the last zero crossing sample before the given index.
findStartPointForThreshold:
Answer the index of the last zero crossing sample before the first sample whose absolute value (in either the right or left channel) exceeds the given threshold.
firstSample
firstSample:
fromAIFFFileNamed:mergeIfStereo:
Initialize this sound from the data in the given AIFF file. If mergeFlag is true and the file is stereo, its left and right channels are mixed together to produce a mono sampled sound.
fromAIFFFileReader:mergeIfStereo:
Initialize this sound from the data in the given AIFF file. If mergeFlag is true and the file is stereo, its left and right channels are mixed together to produce a mono sampled sound.
gain
gain:
highestSignificantFrequencyAt:
Answer the highest significant frequency in the sample window starting at the given index. A frequency is considered significant if its power is at least 1/50th that of the maximum frequency component in the frequency spectrum.
indexOfFirstPointOverThreshold:
Answer the index of the first sample whose absolute value exceeds the given threshold.
initialize
This default initialization creates a loop consisting of a single cycle of a sine wave.
isLooped
isStereo
Answer true if this sound has distinct left and right channels. (Every sound plays into a stereo sample buffer, but most sounds, which produce exactly the same samples on both channels, are not stereo.)
leftSamples
leftSamples:
loopEnd
loopLength
mixSampleCount:into:startingAt:leftVol:rightVol:
Play samples from a wave table by stepping a fixed amount through the table on every sample. The table index and increment are scaled to allow fractional increments for greater pitch accuracy. If a loop length is specified, then the index is looped back when the loopEnd index is reached until count drops below releaseCount. This allows a short sampled sound to be sustained indefinitely.
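The scaled fixed-point index stepping can be sketched as follows (Python, illustrative; the scale factor of 512 and the integer loop length are assumptions here -- the real implementation supports fractional loop lengths for pitch accuracy on short loops):

```python
def play_wave_table(table, scaled_increment, count, loop_end, loop_length,
                    scale=512):
    """Step through a wave table by a fixed (possibly fractional)
    amount per output sample.

    The index and increment are scaled integers, which allows
    fractional increments for greater pitch accuracy.  When the
    scaled index passes loop_end, it is wrapped back by loop_length,
    so a short sampled sound can be sustained indefinitely."""
    out = []
    scaled_index = 0
    scaled_loop_end = loop_end * scale
    scaled_loop_length = loop_length * scale
    for _ in range(count):
        out.append(table[scaled_index // scale])   # integer part indexes the table
        scaled_index += scaled_increment
        if scaled_index >= scaled_loop_end:        # loop back for sustain
            scaled_index -= scaled_loop_length
    return out
```

An increment of `scale` steps one table entry per sample; an increment of `3 * scale // 2` steps 1.5 entries, raising the pitch by a fifth.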
normalizedResultsFromFFT:
Answer an array whose size is half of the FFT window size containing power in each frequency band, normalized to the average power over the entire FFT. A value of 10.0 in this array thus means that the power at the corresponding frequencies is ten times the average power across the entire FFT.
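The normalization is straightforward once the spectrum is in hand. A sketch (Python, illustrative; a naive DFT stands in for the FFT, and a non-silent window is assumed so the average power is nonzero):

```python
import cmath

def normalized_power_spectrum(samples):
    """Answer len(samples) // 2 power values, one per frequency band,
    each normalized to the average power over all bands.

    A value of 10.0 means that band holds ten times the average
    power across the whole spectrum."""
    n = len(samples)
    # Naive DFT of the first n // 2 bands (an FFT would be used in practice).
    spectrum = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
                for k in range(n // 2)]
    power = [abs(x) ** 2 for x in spectrum]
    average = sum(power) / len(power)
    return [p / average for p in power]
```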
objectForDataStream:
Answer an object to store on a data stream, a copy of myself whose SampleBuffers have been converted into ByteArrays.
originalSamplingRate
For sampled sounds, answer the sampling rate used to record the stored samples. For other sounds, this is the same as the playback sampling rate.
perceivedPitch
pitch
pitch:
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
rightSamples
rightSamples:
samples
For compatibility with SampledSound. Just return my left channel (which is the only channel if I am mono).
samples:loopEnd:loopLength:pitch:samplingRate:
Make this sound use the given samples array with a loop of the given length starting at the given index. The loop length may have a fractional part; this is necessary to achieve pitch accuracy for short loops.
samplesRemaining
Answer the number of samples remaining until the end of this sound.
setPitch:dur:loudness:
(LoopedSampledSound pitch: 440.0 dur: 2.5 loudness: 0.4) play
stopAfterMSecs:
Terminate this note after the given number of milliseconds.
storeSampleCount:bigEndian:on:
Store my samples on the given stream at the current SoundPlayer sampling rate. If bigFlag is true, then each 16-bit sample is stored most-significant byte first (AIFF files), otherwise it is stored least-significant byte first (WAV files).
unloopedSamples:pitch:samplingRate:
Make this sound play the given samples unlooped. The samples have the given perceived pitch when played at the given sampling rate. By convention, unpitched sounds such as percussion sounds should specify a pitch of nil or 100 Hz.
MIDIFileReader
A reader for Standard 1.0 format MIDI files.
MIDI File Types:
type 0 -- one multi-channel track
type 1 -- one or more simultaneous tracks
type 2 -- a number of independent single-track patterns
Instance variables:
stream -- source of MIDI data
fileType -- MIDI file type
trackCount -- number of tracks in file
ticksPerQuarter -- number of ticks per quarter note for all tracks in this file
tracks -- collects track data for non-empty tracks
strings -- collects all strings in the MIDI file
tempoMap -- nil or a MIDITrack consisting only of tempo change events
trackStream -- stream on buffer containing track chunk
track -- track being read
activeEvents -- notes that have been turned on but not off
asScore
endAllNotesAt:
End of score; end any notes still sounding.
endNote:chan:at:
guessMissingInstrumentNames
Attempt to guess missing instrument names from the first program change in that track.
isTempoTrack:
Return true if the given event list is non-empty and contains only tempo change events.
metaEventAt:
Read a meta event. Event types appear roughly in order of expected frequency.
next16BitWord
Read a 16-bit positive integer from the input stream, most significant byte first.
next32BitWord:
Read a 32-bit positive integer from the input stream.
readChunkSize
Read a 32-bit positive integer from the next 4 bytes, most significant byte first.
readChunkType
Read a chunk ID string from the next 4 bytes.
readHeaderChunk
readMIDIFrom:
Read one or more MIDI tracks from the given binary stream.
readTrackChunk
readTrackContents:
readTrackEvents
Read the events of the current track.
readVarLengthIntFrom:
Read a one to four byte positive integer from the given stream, most significant byte first. Use only the lowest seven bits of each byte. The highest bit of a byte is set for all bytes except the last.
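This is the standard MIDI variable-length quantity. A sketch of the decoding rule (Python, illustrative; here the data is an indexable byte sequence rather than a stream, and the function answers the value together with the index of the next unread byte):

```python
def read_var_length_int(data, index=0):
    """Decode a MIDI variable-length quantity starting at index.

    Each byte contributes its low seven bits, most significant first;
    the high bit is set on every byte except the last.  Answer the
    value and the index of the first byte after it."""
    value = 0
    for i in range(index, index + 4):
        byte = data[i]
        value = (value << 7) | (byte & 0x7F)  # append seven payload bits
        if byte < 0x80:                       # high bit clear: last byte
            return value, i + 1
    raise ValueError('variable-length quantity longer than four bytes')
```

The four-byte maximum, `FF FF FF 7F`, decodes to 268435455 (2^28 - 1), the largest delta time a Standard MIDI File can express.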
report:
riffSkipToMidiChunk
This file is a RIFF file which may (or may not) contain a MIDI chunk. Thanks to Andreas Raab for this code.
scanForMIDIHeader
Scan the first part of this file in search of the MIDI header string 'MThd'. Report an error if it is not found. Otherwise, leave the input stream positioned to the first byte after this string.
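The search itself is simple. A sketch (Python, illustrative; the 2048-byte search window is an assumption, not necessarily the limit Squeak uses):

```python
def scan_for_midi_header(data, limit=2048):
    """Search the first `limit` bytes for the 'MThd' chunk ID.

    Answer the offset of the first byte after the ID, or raise an
    error if the header is not found."""
    pos = bytes(data[:limit]).find(b'MThd')
    if pos < 0:
        raise ValueError("MIDI header (MThd) not found")
    return pos + 4
```

Scanning rather than requiring 'MThd' at offset zero tolerates files with leading junk or wrapper headers before the MIDI data.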
scoreFromFileNamed:
scoreFromStream:
scoreFromURL:
splitIntoTracks
Split a type zero MIDI file into separate tracks by channel number.
standardMIDIInstrumentNames
startNote:vel:chan:at:
Record the beginning of a note.
trackContainsNotes:
Answer true if the given track contains at least one note event.
MIDIInputParser
I am a parser for a MIDI data stream. I support:
real-time MIDI recording,
overdubbing (recording while playing),
monitoring incoming MIDI, and
interactive MIDI performances.
Note: MIDI controllers such as pitch benders and breath controllers generate large volumes of data which consume processor time. In cases where this information is not of interest to the program using it, it is best to filter it out as soon as possible. I support various options for doing this filtering, including filtering by MIDI channel and/or by command type.
clearBuffers
Clear the MIDI record buffers. This should be called at the start of recording or real-time MIDI processing.
endSysExclusive:
Error! Received 'end system exclusive' command when not receiving system exclusive data.
ignoreChannel:
Don't record any events arriving on the given MIDI channel (in the range 1-16).
ignoreCommand:
Don't record the given MIDI command on any channel.
ignoreOne:
Ignore a one argument command.
ignoreSysEx:
If the argument is true, then ignore incoming system exclusive messages.
ignoreTuneAndRealTimeCommands
Ignore tuning requests and real-time commands.
ignoreTwo:
Ignore a two argument command.
ignoreZero:
Ignore a zero argument command, such as a tune request or a real-time message. Stay in the current state and don't change active status. Note that real-time messages can arrive between data bytes without disruption.
initialize
Subclasses should redefine this method to perform initializations on instance creation
midiDo:
Poll the incoming MIDI stream in real time and call the given block for each complete command that has been received. The block takes one argument, which is an array of the form (<time><cmd byte>[<arg1>[<arg2>]]). The number of arguments depends on the command byte. For system exclusive commands, the argument is a ByteArray containing the system exclusive message.
midiDoUntilMouseDown:
Process the incoming MIDI stream in real time by calling midiActionBlock for each MIDI event. This block takes three arguments: the MIDI command byte and two argument bytes. One or both argument bytes may be nil, depending on the MIDI command. If not nil, evaluate idleBlock regularly whether MIDI data is available or not. Pressing any mouse button terminates the interaction.
midiPort
midiPort:
Use the given MIDI port.
monitor
Print MIDI messages to the transcript until any mouse button is pressed.
noFiltering
Revert to accepting all MIDI commands on all channels. This undoes any earlier request to filter the incoming MIDI stream.
on:
printCmd:with:with:
Print the given MIDI command.
processByte:
Process the given incoming MIDI byte and record completed commands.
processMIDIData
Process all MIDI data that has arrived since the last time this method was executed. This method should be called frequently to process, filter, and timestamp MIDI data as it arrives.
received
Answer my current collection of all MIDI commands received. Items in this list have the form (<time><cmd byte>[<arg1>[<arg2>]]). Note that the real-time processing facility, midiDo:, removes items from this list as it processes them.
recordOne:
Record a one argument command at the current time.
recordOnlyChannels:
Record only MIDI data arriving on the given list of channel numbers (in the range 1-16).
recordTwo:
Record a two argument command at the current time.
recordZero:
Record a zero-byte message, such as tune request or a real-time message. Don't change active status. Note that real-time messages can arrive between data bytes without disruption.
setMIDIPort:
Initialize this instance for recording from the given MIDI port. Tune and real-time commands are filtered out by default; the client can send noFiltering to receive these messages.
startSysExclusive:
The beginning of a variable length 'system exclusive' command.
undefined:
We have received an unexpected MIDI byte (e.g., a data byte when we were expecting a command). This should never happen.
MIDIScore
A MIDIScore is a container for a number of MIDI tracks as well as an ambient track for such things as sounds, book page triggers and other related events.
addAmbientEvent:
ambientEventAfter:ticks:
ambientTrack
appendEvent:fullDuration:at:
It is assumed that the noteEvent already has the proper time
cutSelection:
durationInTicks
eventForTrack:after:ticks:
eventMorphsDo:
Evaluate aBlock for all morphs related to the ambient events.
eventMorphsWithTimeDo:
Evaluate aBlock for all morphs and times related to the ambient events.
gridToNextQuarterNote:
gridToQuarterNote:
gridTrack:toQuarter:at:
initialize
Subclasses should redefine this method to perform initializations on instance creation
insertEvents:at:
jitterStartAndEndTimesBy:
pauseFrom:
removeAmbientEventWithMorph:
resetFrom:
resumeFrom:
tempoMap
tempoMap:
ticksPerQuarterNote
ticksPerQuarterNote:
trackInfo
trackInfo:
tracks
tracks:
MIDISynth
I implement a simple real-time MIDI synthesizer on platforms that support MIDI input. I work best on platforms that allow the sound buffer to be made very short--under 50 milliseconds is good and under 20 milliseconds is preferred (see below). The buffer size is changed by modifying the class initialization method of SoundPlayer and executing the do-it there to re-start the sound player.
Each instance of me takes input from a single MIDI input port. Multiple instances of me can be used to handle multiple MIDI input ports. I distribute incoming commands among my sixteen MIDISynthChannel objects. Most of the interpretation of the MIDI commands is done by these channel objects.
Buffer size notes: At the moment, most fast PowerPC Macintosh computers can probably work with buffer sizes down to 50 milliseconds, and the Powerbook G3 works down to about 15 milliseconds. You will need to experiment to discover the minimum buffer size that does not result in clicking during sound output. (Hint: Be sure to turn off power cycling on your Powerbook. Other applications and extensions can steal cycles from Squeak, causing intermittent clicking. Experimentation may be necessary to find a configuration that works for you.)
channel:
closeMIDIPort
example
initialize
Subclasses should redefine this method to perform initializations on instance creation
instrumentForChannel:
instrumentForChannel:put:
isOn
midiParser
midiPort
midiPort:
midiTrackingLoop
mutedForChannel:put:
panForChannel:
panForChannel:put:
processMIDI
Process some MIDI commands. Answer true if any commands were processed.
processMIDIUntilMouseDown
Used for debugging. Do MIDI processing until the mouse is pressed.
setAllChannelMasterVolumes:
startMIDITracking
stopMIDITracking
volumeForChannel:
volumeForChannel:put:
MIDISynthChannel
I implement one polyphonic channel of a 16-channel MIDI synthesizer. Many MIDI commands affect all the notes played on a particular channel, so I record the state for a single channel, including a list of notes currently playing.
This initial implementation is extremely spartan, having just enough functionality to play notes. Things that are not implemented include:
1. program changes
2. sustain pedal
3. aftertouch (either kind)
4. most controllers
5. portamento
6. mono-mode
adjustPitch:
Handle a pitch-bend change.
channelPressure:
Handle a channel pressure (channel aftertouch) change.
control:value:
Handle a continuous controller change.
convertVelocity:
Map a value in the range 0..127 to a volume in the range 0.0..1.0.
doChannelCmd:byte1:byte2:
Dispatch a channel command with the given arguments.
initialize
Subclasses should redefine this method to perform initializations on instance creation
instrument
instrument:
key:pressure:
Handle a key pressure (polyphonic aftertouch) change. Rarely implemented.
keyDown:vel:
Handle a key down event with non-zero velocity.
keyUp:vel:
Handle a key up event.
masterVolume
masterVolume:
Set the master volume to the given value (0.0 to 1.0).
muted
muted:
newVolume:
Set the channel volume to the given level, a number in the range 0..127.
pan
pan:
Set the left-right pan to the given value (0.0 to 1.0).
pitchBend:
Handle a pitch-bend change.
programChange:
Handle a program (instrument) change.
MixedSound
A MixedSound plays a collection of component sounds simultaneously, mixing them into a single sound with independent left and right channel volumes for each component.
Instance Variables
leftVols: <Object>
- the left channel volume setting for each component sound
rightVols: <Object>
- the right channel volume setting for each component sound
soundDone: <Object>
- flags recording which component sounds have finished playing
sounds: <Object>
- the collection of component sounds
+
Return the mix of the receiver and the argument sound.
add:
Add the given sound with a pan setting of centered and no attenuation.
add:pan:
Add the given sound with the given left-right panning and no attenuation.
add:pan:volume:
Add the given sound with the given left-right pan, where 0.0 is full left, 1.0 is full right, and 0.5 is centered. The loudness of the sound will be scaled by volume, which ranges from 0 to 1.0.
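One plausible way to turn a pan and volume into per-channel gains is linear panning (a Python sketch; whether Squeak combines the two values exactly this way is an assumption):

```python
def channel_gains(pan, volume):
    """Split an overall volume into (left, right) channel gains.

    pan 0.0 is full left, 1.0 is full right, 0.5 is centered; volume
    scales both channels and ranges from 0.0 to 1.0."""
    assert 0.0 <= pan <= 1.0 and 0.0 <= volume <= 1.0
    return volume * (1.0 - pan), volume * pan
```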
copy
Copy my component sounds.
copySounds
Private! Support for copying. Copy my component sounds and settings array.
doControl
Update the control parameters of this sound using its envelopes, if any.
duration
Answer the duration of this sound in seconds.
initialize
Subclasses should redefine this method to perform initializations on instance creation
isStereo
Answer true if this sound has distinct left and right channels. (Every sound plays into a stereo sample buffer, but most sounds, which produce exactly the same samples on both channels, are not stereo.)
mixSampleCount:into:startingAt:leftVol:rightVol:
Play a number of sounds concurrently. The level of each sound can be set independently for the left and right channels.
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
sounds
Allows simple sounds to behave as, e.g., sequential sounds.
stopGracefully
End this note with a graceful decay. If the note has envelopes, determine the decay time from its envelopes.
MuLawCodec
I represent a mu-law (u-law) codec. I compress sound data by a factor of 2:1 by encoding the most significant 12 bits of each 16-bit sample as a signed, exponentially encoded byte. The idea is to use more resolution for smaller sample values. This encoding was developed for the North American phone system and a variant of it, a-law, is a European phone standard. It is a popular sound encoding on Unix platforms (.au files).
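The character of the encoding can be sketched with the standard mu-law companding formula (Python, illustrative only -- the Squeak codec uses a table-driven primitive on the top 12 bits rather than floating-point logarithms, and real .au files use a sign-bit byte layout rather than the symmetric -127..127 codes assumed here):

```python
import math

MU = 255.0  # standard mu value for North American telephony

def mu_law_encode(sample):
    """Compress a 16-bit signed sample to a code in -127..127.

    The logarithmic curve spends more of the 8-bit range on small
    sample values, where quantization error is most audible."""
    x = max(-1.0, min(1.0, sample / 32768.0))
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return int(round(y * 127))

def mu_law_decode(code):
    """Expand a mu-law code back to an approximate 16-bit sample."""
    y = code / 127.0
    x = math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
    return int(round(x * 32768.0))
```

A round trip through encode and decode reproduces each sample to within a few percent, with the absolute error growing for larger samples -- exactly the trade the class comment describes.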
bytesPerEncodedFrame
Answer the number of bytes required to hold one frame of compressed sound data. Answer zero if this codec produces encoded frames of variable size.
decodeFrames:from:at:into:at:
Decode the given number of monophonic frames, starting at the given index in the given ByteArray of compressed sound data, and store the decoded samples into the given SoundBuffer starting at the given destination index. Answer a pair containing the number of bytes of compressed data consumed and the number of decompressed samples produced.
encodeFrames:from:at:into:at:
Encode the given number of frames, starting at the given index in the given monophonic SoundBuffer, and store the encoded sound data into the given ByteArray starting at the given destination index. Encode only as many complete frames as will fit into the destination. Answer a pair containing the number of samples consumed and the number of bytes of compressed data produced.
initialize
Subclasses should redefine this method to perform initializations on instance creation
samplesPerFrame
Answer the number of sound samples per compression frame.
uLawDecodeSample:
Decode a 16-bit signed sample from 8 bits using uLaw decoding
uLawEncode12Bits:
Encode a 12-bit unsigned sample (0-4095) into 7 bits using uLaw encoding.
This gets called by a method that scales 16-bit signed integers down to a
12-bit magnitude, and then ORs in 16r80 if they were negative.
Detail: May get called with s >= 4096, and this works fine.
uLawEncodeSample:
Encode a 16-bit signed sample into 8 bits using uLaw encoding
NoteEvent
Represents a note on or off event in a MIDI score.
channel
channel:
duration
duration:
endNoteOnMidiPort:
Output a noteOff event to the given MIDI port. (Actually, output a noteOff event with zero velocity. This does the same thing, but allows running status to be used when sending a mixture of note on and off commands.)
endTime
Subclasses should override to return the ending time if the event has some duration.
isNoteEvent
key:velocity:channel:
keyName
Return a note name for my pitch.
midiKey
midiKey:
pitch
Convert my MIDI key number to a pitch and return it.
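The conversion follows the equal-temperament formula, anchored at A4 (MIDI key 69) = 440 Hz. A sketch:

```python
def key_to_pitch(midi_key):
    """Convert a MIDI key number (0..127) to a frequency in Hz.

    Equal temperament: each semitone is a factor of 2^(1/12),
    anchored at A4 = MIDI key 69 = 440 Hz."""
    return 440.0 * 2.0 ** ((midi_key - 69) / 12.0)
```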
printOn:
Append to the argument, aStream, a sequence of characters that
identifies the receiver.
startNoteOnMidiPort:
Output a noteOn event to the given MIDI port.
velocity
velocity:
PianoRollNoteMorph
A PianoRollNoteMorph is drawn as a simple morph, but it carries the necessary state to locate its source sound event via its owner (a PianoRollScoreMorph) and the score therein. Simple editing of pitch and time placement is provided here.
deselect
drawOn:
editPitch:
fullBounds
Return the bounding box of the receiver and all its children. Recompute the layout if necessary.
gridToNextQuarter
gridToPrevQuarter
handlesMouseDown:
Do I want to receive mouseDown events (mouseDown:, mouseMove:, mouseUp:)?
indexInTrack
invokeNoteMenu:
Invoke the note's edit menu.
mouseDown:
Handle a mouse down event. The default response is to let my
eventHandler, if any, handle it.
mouseMove:
Handle a mouse move event. The default response is to let my eventHandler, if any, handle it.
mouseUp:
Handle a mouse up event. The default response is to let my eventHandler, if any, handle it.
noteInScore
noteOfDuration:
playSound
This STARTS a single long sound. It must be stopped by playing another or nil.
playSound:
select
selectFrom:
selectNotes:
selected
soundOfDuration:
trackIndex
trackIndex:indexInTrack:
PitchBendEvent
A PitchBendEvent represents a MIDI pitch-bend change in a score.
Instance Variables
bend: <Object>
- the pitch-bend value
channel: <Object>
- the MIDI channel on which this event occurs
bend
bend:
bend:channel:
channel
channel:
isPitchBend
outputOnMidiPort:
Output this event to the given MIDI port.
printOn:
Append to the argument, aStream, a sequence of characters that
identifies the receiver.
PitchEnvelope
A PitchEnvelope is an envelope that controls the pitch of its target sound, supporting effects such as vibrato and pitch slides.
Instance Variables
centerPitch: <Object>
- the center pitch around which this envelope varies
centerPitch
centerPitch:
Set the center pitch of a pitch-controlling envelope. This default implementation does nothing.
updateSelector
Needed by the envelope editor.
updateTargetAt:
Update the pitch for my target. Answer true if the value changed.
PluckedSound
The Karplus-Strong plucked string algorithm: start with a buffer full of random noise and repeatedly play the contents of that buffer while averaging adjacent samples. High harmonics damp out more quickly, transferring their energy to lower ones. The length of the buffer corresponds to the length of the string. Fractional indexing is used to allow precise tuning; without this, the pitch would be rounded to the pitch corresponding to the nearest buffer size.
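The algorithm is short enough to sketch in full (Python, illustrative; the fractional-index tuning mentioned above is omitted, so this version rounds the pitch to the nearest integer buffer size):

```python
import random

def pluck(ring_size, count, seed=42):
    """Karplus-Strong plucked string.

    Fill a ring buffer with random noise, then repeatedly emit the
    oldest sample while replacing it with the average of itself and
    its neighbor.  The averaging acts as a low-pass filter, so high
    harmonics die out first; the pitch is sampling_rate / ring_size."""
    rng = random.Random(seed)
    ring = [rng.uniform(-1.0, 1.0) for _ in range(ring_size)]
    out = []
    for i in range(count):
        j = i % ring_size
        out.append(ring[j])
        # Feed back the average of this sample and the next one.
        ring[j] = 0.5 * (ring[j] + ring[(i + 1) % ring_size])
    return out
```

Each pass through the ring smooths the waveform a little more, so the initial noise burst quickly settles into a decaying, nearly periodic tone.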
copy
A sound should copy all of the state needed to play itself, allowing two copies of a sound to play at the same time. These semantics require a recursive copy but only down to the level of immutable data. For example, a SampledSound need not copy its sample buffer. Subclasses overriding this method should include a resend to super.
copyRing
Private! Support for copying
default
duration
Answer the duration of this sound in seconds.
duration:
Scale my envelopes to the given duration. Subclasses overriding this method should include a resend to super.
mixSampleCount:into:startingAt:leftVol:rightVol:
The Karplus-Strong plucked string algorithm: start with a buffer full of random noise and repeatedly play the contents of that buffer while averaging adjacent samples. High harmonics damp out more quickly, transferring their energy to lower ones. The length of the buffer corresponds to the length of the string.
reset
Fill the ring with random noise.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
setPitch:dur:loudness:
Initialize my envelopes for the given parameters. Subclasses overriding this method should include a resend to super.
stopAfterMSecs:
Terminate this note after the given number of milliseconds.
ProgramChangeEvent
A ProgramChangeEvent represents a MIDI program (instrument) change in a score.
Instance Variables
channel: <Object>
- the MIDI channel on which this event occurs
program: <Object>
- the program (instrument) number to select
channel
channel:
isProgramChange
outputOnMidiPort:
Output this event to the given MIDI port.
printOn:
Append to the argument, aStream, a sequence of characters that
identifies the receiver.
program
program:
program:channel:
QueueSound
I am a queue for sound - give me a bunch of sounds to play and I will play them one at a time in the order that they are received.
Example:
"Here is a simple example which plays two sounds three times."
| clink warble queue |
clink := SampledSound soundNamed: 'clink'.
warble := SampledSound soundNamed: 'warble'.
queue := QueueSound new.
3 timesRepeat: [
queue add: clink; add: warble
].
queue play.
Structure:
startTime Integer -- if present, start playing when startTime <= Time millisecondClockValue
(schedule the sound to play later)
sounds SharedQueue -- the synchronized list of sounds.
currentSound AbstractSound -- the currently active sound
done Boolean -- am I done playing ?
Other:
You may want to keep track of the queue's position so that you can feed it at an appropriate rate. To do this in an event driven way, modify or subclass nextSound to notify you when appropriate. You could also poll by checking currentSound, but this is not recommended for most applications.
add:
currentSound
currentSound:
doControl
Update the control parameters of this sound using its envelopes, if any.
done:
initialize
Subclasses should redefine this method to perform initializations on instance creation
mixSampleCount:into:startingAt:leftVol:rightVol:
Play a collection of sounds in sequence.
nextSound
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
sounds
Allows simple sounds to behave as, e.g., sequential sounds.
startTime
startTime:
RandomEnvelope
A RandomEnvelope updates its target with values that vary randomly between its limits, useful for adding slight random variation (for example, vibrato depth or rate) to a sound parameter.
Instance Variables
delta: <Object>
- the maximum amount by which the value may change on each update
highLimit: <Object>
- the upper limit of the generated values
lowLimit: <Object>
- the lower limit of the generated values
rand: <Object>
- the random number generator
centerPitch:
If this envelope controls pitch, set its scale to the given number. Otherwise, do nothing.
delta
delta:
duration
Return the time of the final point.
duration:
Do nothing.
for:
highLimit
highLimit:
initialize
Subclasses should redefine this method to perform initializations on instance creation
lowLimit
lowLimit:
name
Answer a name for the receiver. This is used generically in the title of certain inspectors, such as the referred-to inspector, and specifically by various subsystems. By default, we let the object just print itself out.
points
setPoints:loopStart:loopEnd:
sustainEnd:
Do nothing.
updateTargetAt:
Send my updateSelector to the given target object with the value of this envelope at the given number of milliseconds from its onset. Answer true if the value changed.
volume:
If this envelope controls volume, set its scale to the given number. Otherwise, do nothing.
RepeatingSound
A RepeatingSound plays its component sound repeatedly, either a given number of times or indefinitely.
Instance Variables
iteration: <Object>
- the index of the repetition currently playing
iterationCount: <Object>
- the number of repetitions, or the symbol #forever
samplesPerIteration: <Object>
- the number of samples in one repetition of the sound
sound: <Object>
- the sound to be repeated
carMotorSound
carMotorSound:
copy
Copy my component sound.
copySound
Private! Support for copying. Copy my component sound.
doControl
Update the control parameters of this sound using its envelopes, if any.
initializeCarMotor
iterationCount
iterationCount:
mixSampleCount:into:startingAt:leftVol:rightVol:
Play my component sound repeatedly.
repeat:count:
repeatForever:
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
setPitch:dur:loudness:
Initialize my envelopes for the given parameters. Subclasses overriding this method should include a resend to super.
setSound:iterations:
Initialize the receiver to play the given sound the given number of times. If iteration count is the symbol #forever, then repeat indefinitely.
sound
sound:
RestSound
A RestSound plays silence for a given duration, representing a rest in a musical score.
Instance Variables
count: <Object>
- the number of samples of silence remaining to be played
initialCount: <Object>
- the total number of samples of silence
dur:
duration
Answer the duration of this sound in seconds.
duration:
Scale my envelopes to the given duration. Subclasses overriding this method should include a resend to super.
mixSampleCount:into:startingAt:leftVol:rightVol:
Play silence for a given duration.
pitch:dur:loudness:
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samples
Answer a monophonic sample buffer containing my samples. The left and right channels are merged.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
setDur:
Set rest duration in seconds.
ReverbSound
A ReverbSound adds reverberation to its component sound by mixing in delayed, attenuated copies of the signal through a set of delay taps.
Instance Variables
bufferIndex: <Object>
- the current index into the circular delay buffers
bufferSize: <Object>
- the size of the circular delay buffers
leftBuffer: <Object>
- the circular buffer of recent left channel samples
rightBuffer: <Object>
- the circular buffer of recent right channel samples
sound: <Object>
- the component sound to which reverberation is added
tapCount: <Object>
- the number of delay taps
tapDelays: <Object>
- the delay, in samples, of each tap
tapGains: <Object>
- the gain applied to each tap
applyReverbTo:startingAt:count:
copy
Copy my component sound.
copySound
Private! Support for copying. Copy my component sound.
doControl
Update the control parameters of this sound using its envelopes, if any.
mixSampleCount:into:startingAt:leftVol:rightVol:
Play my sound with reverberation.
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
sound
sound:
tapDelays:gains:
ReverbSound new tapDelays: #(537 691 1191) gains: #(0.07 0.07 0.07)
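The tap mechanism can be sketched as a multi-tap feedback delay (Python, illustrative, mono; whether Squeak feeds back the wet or the dry signal, and how it scales the gains, are implementation details not stated here):

```python
def apply_reverb(samples, tap_delays, tap_gains):
    """Mix delayed, attenuated copies of the output back into each
    sample.

    Feeding back the already-reverberated output (rather than the
    dry input) gives each tap a repeating, decaying tail."""
    out = list(samples)
    for i in range(len(out)):
        for delay, gain in zip(tap_delays, tap_gains):
            if i >= delay:
                out[i] += gain * out[i - delay]
    return out
```

Mutually prime delays like the 537, 691, and 1191 in the example above keep the tap echoes from piling up on the same instants, which would sound metallic.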
SampledInstrument
I represent a collection of individual notes at different pitches, volumes, and articulations. On request, I can select the best note to use for a given pitch, duration, and volume. I currently only support two volumes, loud and soft, and two articulations, normal and staccato, but I can easily be extended to include more. The main barrier to keeping more variations is simply the memory space (assuming my component notes are sampled sounds).
allNotes
Answer a collection containing all the unique sampled sounds used by this instrument.
allSampleSets:
buildSmallOrchestra
chooseSamplesForPitch:from:
From the given collection of LoopedSampledSounds, choose the best one to be pitch-shifted to produce the given pitch.
initialize
Subclasses should redefine this method to perform initializations on instance creation
loudThreshold
loudThreshold:
memorySpace
Answer the number of bytes required to store the samples for this instrument.
midiKeyMapFor:
Return a 128-element array that maps each MIDI key number to the sampled note from the given set with the closest pitch. A precise match isn't necessary because the selected note will be pitch-shifted to play at the correct pitch.
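Building the map amounts to a nearest-pitch search over the sample set. A sketch (Python, illustrative; closeness is measured in Hz here, whereas comparing in log-pitch would be equally plausible):

```python
def midi_key_map(note_pitches):
    """Answer a 128-element list mapping each MIDI key number to the
    index of the note whose recorded pitch is nearest that key's
    equal-temperament frequency (A4 = key 69 = 440 Hz).

    The chosen note is then pitch-shifted to the exact pitch."""
    def key_to_pitch(k):
        return 440.0 * 2.0 ** ((k - 69) / 12.0)
    return [min(range(len(note_pitches)),
                key=lambda n: abs(note_pitches[n] - key_to_pitch(k)))
            for k in range(128)]
```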
playChromaticRunFrom:to:
pruneNoteList:notesPerOctave:
Return a pruned version of the given note list with only the given number of notes per octave. Assume the given note list is in sorted order.
pruneToNotesPerOctave:
Prune all my keymaps to the given number of notes per octave.
pruneToSingleNote:
Fill all my keymaps with the given note.
readLoudAndStaccatoInstrument:fromDirectory:
readPizzInstrument:fromDirectory:
readSampleSetFrom:
Answer a collection of sounds read from AIFF files in the given directory and sorted in ascending pitch order.
readSampleSetInfoFrom:
MessageTally spyOn: [SampledInstrument new readSampleSetFrom: 'Tosh:Desktop Folder:AAA Squeak2.0 Beta:Organ Samples:Flute8'] timeToRun
readSimpleInstrument:fromDirectory:
soundForMidiKey:dur:loudness:
Answer an initialized sound object that generates a note for the given MIDI key (in the range 0..127), duration (in seconds), and loudness (in the range 0.0 to 1.0).
soundForPitch:dur:loudness:
Answer an initialized sound object that generates a note of the given pitch, duration, and loudness. Pitch may be a numeric pitch or a string pitch name such as 'c4'. Duration is in seconds and loudness is in the range 0.0 to 1.0.
staccatoLoudAndSoftSampleSet:
staccatoLoudSampleSet:
staccatoSoftSampleSet:
sustainedLoudSampleSet:
sustainedSoftSampleSet:
sustainedThreshold
sustainedThreshold:
testAtPitch:
SampledInstrument testAtPitch: 'c4'
trimAttackOf:threshold:
Trim 'silence' off the initial attacks of the given sound buffer.
trimAttacks:
Trim 'silence' off the initial attacks of all my samples.
SampledSound
A SampledSound plays back a buffer of 16-bit sound samples. The samples can be played at their original sampling rate or resampled to sound at different pitches.
Instance Variables
count: <Object>
indexHighBits: <Object>
initialCount: <Object>
originalSamplingRate: <Object>
samples: <Object>
samplesSize: <Object>
scaledIncrement: <Object>
scaledIndex: <Object>
count
- xxxxx
indexHighBits
- xxxxx
initialCount
- xxxxx
originalSamplingRate
- xxxxx
samples
- xxxxx
samplesSize
- xxxxx
scaledIncrement
- xxxxx
scaledIndex
- xxxxx
addLibrarySoundNamed:fromAIFFfileNamed:
addLibrarySoundNamed:samples:samplingRate:
assimilateSoundsFrom:
beep
coffeeCupClink
compressWith:
compressWith:atRate:
convert8bitSignedFrom:to16Bit:
convert8bitSignedTo16Bit:
convert8bitUnsignedTo16Bit:
convertBytesTo16BitSamples:mostSignificantByteFirst:
defaultSampleTable:
defaultSamples:repeated:
duration
duration:
Scale my envelopes to the given duration. Subclasses overriding this method should include a resend to super.
endGracefully
See stopGracefully, which affects initialCount, and I don't think it should (di).
fromAIFFfileNamed:
fromWaveFileNamed:
fromWaveStream:
initialize
Subclasses should redefine this method to perform initializations on instance creation
initializeCoffeeCupClink
mixSampleCount:into:startingAt:leftVol:rightVol:
Mix the given number of samples with the samples already in the given buffer starting at the given index. Assume that the buffer size is at least (index + count) - 1.
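The mixing step described above can be sketched as follows. This is an illustration only: the interleaved stereo buffer layout and the 0..1000 volume scale are assumptions, not taken from the text.

```python
def mix_into(buffer, samples, start, left_vol, right_vol):
    """Add scaled samples into an existing stereo buffer of interleaved
    (left, right) 16-bit values. Volumes are integers in 0..1000 so the
    inner loop stays in integer arithmetic."""
    for i, s in enumerate(samples):
        slot = start + i
        buffer[2 * slot] += s * left_vol // 1000      # left channel
        buffer[2 * slot + 1] += s * right_vol // 1000  # right channel
    return buffer

buf = mix_into([0] * 8, [1000, -1000], 1, 1000, 500)
```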
next16BitWord:from:
next32BitWord:from:
nominalSamplePitch:
originalSamplingRate
For sampled sounds, answer the sampling rate used to record the stored samples. For other sounds, this is the same as the playback sampling rate.
pitch:
playSilentlyUntil:
Used to fast forward to a particular starting time. Overridden to be instant for sampled sounds.
playSoundNamed:
putCoffeeCupClinkInSoundLibrary
readWaveChunk:inRIFF:
removeSoundNamed:
reset
Details: The sample index and increment are scaled to allow fractional increments without having to do floating point arithmetic in the inner loop.
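The fixed-point trick described in the Details note can be sketched like this. The 16-bit fraction and the helper are illustrative assumptions, not Squeak's actual constants.

```python
SCALE = 1 << 16  # assume 16 fractional bits for the fixed-point index

def resample(samples, rate_ratio, n_out):
    """Step through `samples` by `rate_ratio` input samples per output
    sample. The index and increment are scaled so the inner loop needs
    only integer adds and shifts, no floating point."""
    scaled_index = 0
    scaled_increment = int(rate_ratio * SCALE)  # float math once, outside the loop
    out = []
    for _ in range(n_out):
        out.append(samples[scaled_index >> 16])
        scaled_index += scaled_increment
    return out

half_speed = resample([10, 20, 30, 40], 0.5, 8)
```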
samples
Answer a monophonic sample buffer containing my samples. The left and right channels are merged.
samples:samplingRate:
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
setPitch:dur:loudness:
Used to play scores using the default sample table.
setSamples:samplingRate:
Set my samples array to the given array with the given nominal sampling rate. Altering the rate parameter allows the sampled sound to be played back at different pitches.
setScaledIncrement:
sonogramMorph:from:to:nPoints:
FYI: It is very cool that we can do this, but for sound tracks on a movie,
simple volume is easier to read, easier to scale, and way faster to compute.
Code preserved here just in case it makes a useful example.
soundLibrary
soundNamed:
soundNamed:ifAbsent:
soundNames
stopAfterMSecs:
Terminate this note after the given number of milliseconds.
storeSampleCount:bigEndian:on:
Store my samples on the given stream at the current SoundPlayer sampling rate. If the bigEndian flag is true, then each 16-bit sample is stored most-significant byte first (AIFF files), otherwise it is stored least-significant byte first (WAV files).
uLawDecode:
uLawDecodeTable
uLawEncode:
uLawEncodeSample:
universalSoundKeys
unusedSoundNameLike:
useCoffeeCupClink
volumeForm:from:to:nSamplesPerPixel:
Note: nPerPixel can be Integer or Float for pixel-perfect alignment.
ScorePlayer
This is a real-time player for MIDI scores (i.e., scores read from MIDI files). Scores can be played using either the internal sound synthesis or an external MIDI synthesizer on platforms that support MIDI output.
adjustVolumeTo:overMSecs:
Adjust the volume of this sound to the given volume, a number in the range [0.0..1.0], over the given number of milliseconds. The volume will be changed a little bit on each sample until the desired volume is reached.
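The per-sample ramp described above can be sketched as follows; a linear ramp is an assumption (the comment only says the volume changes a little on each sample).

```python
def volume_ramp(current, target, n_samples):
    """Answer one volume value per sample, moving linearly from
    `current` to `target` over n_samples samples."""
    step = (target - current) / n_samples
    return [current + step * (i + 1) for i in range(n_samples)]

ramp = volume_ramp(0.0, 1.0, 4)  # fade in over 4 samples
```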
closeMIDIPort
Stop using MIDI for output. Music will be played using the built-in sound synthesis.
copy
Copy my component sounds.
copySounds
Private! Support for copying.
disableReverb:
doControl
Update the control parameters of this sound using its envelopes, if any.
duration
Answer the duration in seconds of my MIDI score when played at the current rate. Take tempo changes into account.
durationInTicks
infoForTrack:
Return the info string for the given track.
initialize
Subclasses should redefine this method to perform initializations on instance creation
instrumentForTrack:
instrumentForTrack:put:
isDone
isStereo
Answer true if this sound has distinct left and right channels. (Every sound plays into a stereo sample buffer, but most sounds, which produce exactly the same samples on both channels, are not stereo.)
jumpToTick:
midiPlayLoop
midiPort
millisecondsSinceStart
Answer the approximate number of milliseconds of real time since the beginning of the score. Since this calculation uses the current tempo, which can change throughout the piece, it is safer to use ticksSinceStart for synchronization.
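At a fixed tempo the tick-to-milliseconds relation can be sketched as below, using the standard MIDI convention that a tick is a fixed fraction of a quarter note; the parameter values are illustrative assumptions, and as the comment above notes, the real calculation must track tempo changes.

```python
def ms_since_start(ticks, ticks_per_quarter, bpm):
    """Milliseconds elapsed after `ticks` score ticks at a constant tempo.
    One quarter note lasts 60000/bpm milliseconds."""
    ms_per_tick = 60000.0 / (bpm * ticks_per_quarter)
    return ticks * ms_per_tick

elapsed = ms_since_start(960, ticks_per_quarter=96, bpm=120)
```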
mixSampleCount:into:startingAt:leftVol:rightVol:
Play a number of sounds concurrently. The level of each sound can be set independently for the left and right channels.
mutedForTrack:
mutedForTrack:put:
mutedState
onScore:
openMIDIPort:
Open the given MIDI port. Music will be played as MIDI commands to the given MIDI port.
overallVolume
overallVolume:
Set the overall playback volume to a value between 0.0 (off) and 1.0 (full blast).
panForTrack:
panForTrack:put:
Set the left-right pan for this track to a value in the range [0.0..1.0], where 0.0 means full-left.
pause
Pause this sound. It can be resumed from this point, or reset and resumed to start from the beginning.
positionInScore
positionInScore:
processAllAtTick:
processAmbientEventsAtTick:
Process ambient events through the given tick.
processMIDIEventsAtTick:
Process note events through the given score tick using MIDI.
processNoteEventsAtTick:
Process note events through the given score tick using internal Squeak sound synthesis.
processTempoMapAtTick:
Process tempo changes through the given score tick.
rate
rate:
Set the playback rate. For example, a rate of 2.0 will playback at twice normal speed.
repeat
Return true if this player will repeat when it gets to the end of the score, false otherwise.
repeat:
Turn repeat mode on or off.
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
resumePlaying
Resume playing. Start over if done.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
score
secsPerTick
settingsString
skipAmbientEventsThruTick:
Skip ambient events through the given score tick.
skipNoteEventsThruTick:
Skip note events through the given score tick using internal Squeak sound synthesis.
startMIDIPlaying
Start up a process to play this score via MIDI.
startNote:forStartTick:trackIndex:
Prepare a note to begin playing at the given tick. Used to start playing at an arbitrary point in the score. Handle both MIDI and built-in synthesis cases.
stopMIDIPlaying
Terminate the MIDI player process and turn off any active notes.
tempo
Return the current tempo in beats (quarter notes) per minute. The tempo at any given moment is defined by the score and cannot be changed by the client. To change the playback speed, the client may change the rate parameter.
tempoOrRateChanged
This method should be called after changing the tempo or rate.
ticksForMSecs:
ticksSinceStart
Answer the number of score ticks that have elapsed since this piece started playing. The duration of a tick is determined by the MIDI score.
ticksSinceStart:
Adjust ticks to follow, e.g., piano roll autoscrolling.
trackCount
turnOffActiveMIDINotesAt:
Turn off any active MIDI notes that should be turned off at the given score tick.
updateDuration
volumeForTrack:
volumeForTrack:put:
SequentialSound
A SequentialSound plays a collection of component sounds one after another.
Instance Variables
currentIndex: <Object>
sounds: <Object>
currentIndex
- xxxxx
sounds
- xxxxx
,
Return the concatenation of the receiver and the argument sound.
add:
compressWith:
compressWith:atRate:
copy
Copy my component sounds.
copySounds
Private! Support for copying. Copy my component sounds.
doControl
Update the control parameters of this sound using its envelopes, if any.
duration
Answer the duration of this sound in seconds.
initialize
Subclasses should redefine this method to perform initializations on instance creation
mixSampleCount:into:startingAt:leftVol:rightVol:
Play a collection of sounds in sequence.
pruneFinishedSounds
Remove any sounds that have been completely played.
removeFirstCompleteSoundOrNil
Remove the first sound if it has been completely recorded.
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samplesRemaining
Answer the number of samples remaining until the end of this sound. A sound with an indefinite ending time should answer some large integer such as 1000000.
sounds
Allows simple sounds to behave as, e.g., sequential sounds.
transformSounds:
Private! Support for copying. Copy my component sounds.
SimpleMIDIPort
This is a first cut at a simple MIDI output port.
bufferTimeStampFrom:
Return the timestamp from the given MIDI input buffer. Assume the given buffer is at least 4 bytes long.
close
Close this MIDI port.
closeAllPorts
ensureOpen
Make sure this MIDI port is open. It is good to call this before starting to use a port in case an intervening image save/restore has caused the underlying hardware port to get closed.
examplePlayNoteNamedVelocityOnChannel
flushInput
Read any lingering MIDI data from this port's input buffer.
initialize
Subclasses should redefine this method to perform initializations on instance creation
inputPortNumFromUser
midiCmd:channel:byte:
Immediately output the given MIDI command with the given channel and argument byte to this MIDI port. Assume that the port is open.
midiCmd:channel:byte:byte:
Immediately output the given MIDI command with the given channel and argument bytes to this MIDI port. Assume that the port is open.
midiInstruments
midiIsSupported
midiOutput:
Output the given bytes to this MIDI port immediately. Assume that the port is open.
openDefault
openOnPortNumber:
Open this MIDI port on the given port number.
outputPortNumFromUser
percussionInstruments
playNote:onChannel:
playNote:velocity:onChannel:
playNoteNamed:onChannel:
playNoteNamed:velocity:onChannel:
portDescription:
portNumber
Answer my port number.
primMIDIClosePort:
Close the given MIDI port. Don't fail if port is already closed.
primMIDIOpenPort:readSemaIndex:interfaceClockRate:
Open the given MIDI port. If non-zero, readSemaIndex specifies the index in the external objects array of a semaphore to be signalled when incoming MIDI data is available. Not all platforms support signalling the read semaphore. InterfaceClockRate specifies the clock rate of the external MIDI interface adaptor on Macintosh computers; it is ignored on other platforms.
primMIDIReadPort:into:
Read any available MIDI data into the given buffer (up to the size of the buffer) and answer the number of bytes read.
primMIDIWritePort:from:at:
Queue the given data to be sent through the given MIDI port at the given time. If midiClockValue is zero, send the data immediately.
primPortCount
primPortDirectionalityOf:
primPortNameOf:
readInto:
Read any data from this port into the given buffer.
stopNote:onChannel:
stopNote:velocity:onChannel:
stopNoteNamed:onChannel:
stopNoteNamed:velocity:onChannel:
useInstrument:onChannel:
useInstrumentNumber:onChannel:
Sonogram
Sonograms are ImageMorphs that repeatedly plot arrays of values as black-on-white columns, moving to the right in time and scrolling left as necessary.
extent:
Do nothing; my extent is determined by my image Form.
extent:minVal:maxVal:scrollDelta:
plotColumn:
scroll
SoundBuffer
SoundBuffers store 16 bit unsigned quantities.
asByteArray
Answer a ByteArray containing my sample data serialized in most-significant byte first order.
at:
Return the 16-bit integer value at the given index of the receiver.
at:put:
Store the given 16-bit integer at the given index in the receiver.
averageEvery:from:upTo:
bytesPerElement
Number of bytes in each item. This multiplied by (self size)*8 gives the number of bits stored.
downSampledLowPassFiltering:
Answer a new SoundBuffer half the size of the receiver consisting of every other sample. If doFiltering is true, a simple low-pass filter is applied to avoid aliasing of high frequencies. Assume that receiver is monophonic.
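The 2:1 downsampling described above can be sketched as follows; the three-point average stands in for "a simple low-pass filter" (the exact filter used is not given in the text).

```python
def down_sampled(samples, do_filtering):
    """Answer every other sample of a monophonic buffer. If do_filtering
    is true, smooth first so high frequencies don't alias."""
    if do_filtering:
        # simple low-pass: average each sample with its two neighbors
        padded = [samples[0]] + list(samples) + [samples[-1]]
        samples = [(padded[i] + padded[i + 1] + padded[i + 2]) // 3
                   for i in range(len(samples))]
    return samples[::2]  # keep every other sample

half = down_sampled([0, 100, 0, 100, 0, 100], False)
```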
extractLeftChannel
Answer a new SoundBuffer half the size of the receiver consisting of only the left channel of the receiver, which is assumed to contain stereo sound data.
extractRightChannel
Answer a new SoundBuffer half the size of the receiver consisting of only the right channel of the receiver, which is assumed to contain stereo sound data.
fromArray:
fromByteArray:
indexOfFirstSampleOver:
Return the index of the first sample whose absolute value is over the given threshold value. Return an index one greater than my size if no sample is over the threshold.
indexOfLastSampleOver:
Return the index of the last sample whose absolute value is over the given threshold value. Return zero if no sample is over the threshold.
initialize
Subclasses should redefine this method to perform initializations on instance creation
lowPassFiltered
Answer a simple low-pass filtered copy of this buffer. Assume it is monophonic.
mergeStereo
Answer a new SoundBuffer half the size of the receiver that mixes the left and right stereo channels of the receiver, which is assumed to contain stereo sound data.
monoSampleCount
Return the number of monaural 16-bit samples that fit into this SoundBuffer.
new:
newMonoSampleCount:
newStereoSampleCount:
normalized:
Increase my amplitudes so that the highest peak is the given percent of full volume. For example 's normalized: 50' would normalize to half of full volume.
primFill:
Fill the receiver, an indexable bytes or words object, with the given positive integer. The range of possible fill values is [0..255] for byte arrays and [0..(2^32 - 1)] for word arrays.
restoreEndianness
This word object was just read in from a stream. It was stored in Big Endian (Mac) format. Swap each pair of bytes (16-bit word), if the current machine is Little Endian.
Why is this the right thing to do? We are using memory as a byteStream. High and low bytes are reversed in each 16-bit word, but the stream of words ascends through memory. Different from a Bitmap.
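The swap described above, sketched in Python: each 16-bit word's two bytes are exchanged, and only when the host is little-endian.

```python
import sys

def swap16(raw):
    """Swap the two bytes of each 16-bit word in `raw`."""
    swapped = bytearray(raw)
    swapped[0::2], swapped[1::2] = raw[1::2], raw[0::2]
    return bytes(swapped)

def restore_endianness(raw):
    """Words were stored big-endian (Mac format); swap each pair of
    bytes only if the current machine is little-endian."""
    return swap16(raw) if sys.byteorder == 'little' else raw
```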
reverseEndianness
Swap the bytes of each 16-bit word, using a fast BitBlt hack.
saveAsAIFFFileSamplingRate:on:
Store this mono sound buffer in AIFF file format with the given sampling rate on the given stream.
sineTable
size
Return the number of 16-bit sound samples that fit in this sound buffer. To avoid confusion, it is better to get the size of SoundBuffer using monoSampleCount or stereoSampleCount.
splitStereo
Answer an array of two SoundBuffers half the size of the receiver consisting of the left and right channels of the receiver (which is assumed to contain stereo sound data).
startUp
startUpFrom:
stereoSampleCount
Return the number of stereo slices that fit into this SoundBuffer. A stereo 'slice' consists of two 16-bit samples, one for each channel.
storeExtendedFloat:on:
Store an Apple extended-precision 80-bit floating point number on the given stream.
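AIFF headers store the sampling rate in this 80-bit format: a sign bit plus 15-bit exponent (bias 16383), then a 64-bit mantissa with an explicit leading 1 bit. A sketch for positive finite values (zero/negative/NaN handling omitted):

```python
import math, struct

def extended_float_bytes(value):
    """Encode a positive number as an Apple 80-bit extended float."""
    if value == 0:
        return b'\x00' * 10
    exp = math.floor(math.log2(value))          # unbiased exponent
    mantissa = int(value / 2.0 ** exp * (1 << 63))  # explicit top bit
    return struct.pack('>HQ', exp + 16383, mantissa)

rate_bytes = extended_float_bytes(44100.0)  # as stored for CD-rate AIFF
```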
trimmedThreshold:
writeOnGZIPByteStream:
We only intend this for non-pointer arrays. Do nothing if I contain pointers.
SoundCodec
I am an abstract class that describes the protocol for sound codecs. Each codec (the name stems from "COder/DECoder") describes a particular algorithm for compressing and decompressing sound data. Most sound codecs are called 'lossy' because they lose information; the decompressed sound data is not exactly the same as the original data.
bytesPerEncodedFrame
Answer the number of bytes required to hold one frame of compressed sound data. Answer zero if this codec produces encoded frames of variable size.
compressAndDecompress:
Compress and decompress the given sound. Useful for testing.
compressSound:
Compress the entirety of the given sound with this codec. Answer a CompressedSoundData.
compressSound:atRate:
Compress the entirety of the given sound with this codec. Answer a CompressedSoundData.
decodeCompressedData:
Decode the entirety of the given encoded data buffer with this codec. Answer a monophonic SoundBuffer containing the uncompressed samples.
decodeFrames:from:at:into:at:
Decode the given number of monophonic frames starting at the given index in the given ByteArray of compressed sound data and storing the decoded samples into the given SoundBuffer starting at the given destination index. Answer a pair containing the number of bytes of compressed data consumed and the number of decompressed samples produced.
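The pair answered by decodeFrames:from:at:into:at: lets a generic loop drive any codec without knowing its frame size. A sketch, with a trivial stand-in codec (2 bytes in, 1 sample out) that is purely a hypothetical illustration:

```python
def decode_frames(data, frame_count):
    """Stand-in codec: each frame is one big-endian 16-bit sample.
    Answer (samples, bytes_consumed, samples_produced), mirroring the
    pair-plus-output protocol described above."""
    samples = [(data[2 * i] << 8) | data[2 * i + 1] for i in range(frame_count)]
    return samples, 2 * frame_count, frame_count

def decode_all(data):
    """Generic driver: decode a frame, then advance by the amounts the
    codec answered, until the compressed data is exhausted."""
    pos, out = 0, []
    while pos < len(data):
        samples, used, produced = decode_frames(data[pos:], 1)
        pos += used
        out.extend(samples[:produced])
    return out

decoded = decode_all(bytes([0x01, 0x00, 0x00, 0x02]))
```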
decompressSound:
Decompress the entirety of the given compressed sound with this codec and answer the resulting sound.
encodeFrames:from:at:into:at:
Encode the given number of frames starting at the given index in the given monophonic SoundBuffer and storing the encoded sound data into the given ByteArray starting at the given destination index. Encode only as many complete frames as will fit into the destination. Answer a pair containing the number of samples consumed and the number of bytes of compressed data produced.
encodeSoundBuffer:
Encode the entirety of the given monophonic SoundBuffer with this codec. Answer a ByteArray containing the compressed sound data.
frameCount:
Compute the frame count for this byteArray. This default computation will have to be overridden by codecs with variable frame sizes.
reset
Reset my encoding and decoding state. Optional. This default implementation does nothing.
samplesPerFrame
Answer the number of sound samples per compression frame.
SoundInputStream
This subclass of SoundRecorder supports real-time processing of incoming sound data. The sound input process queues raw sound buffers, allowing them to be read and processed by the client as they become available. A semaphore is used to synchronize between the record process and the client process. Since sound data is buffered, the client process may lag behind the input process without losing data.
allocateBuffer
Allocate a new buffer and reset nextIndex. This message is sent by the sound input process.
bufferCount
Answer the number of sound buffers that have been queued.
bufferSize
bufferSize:
Set the sound buffer size. Buffers of this size will be queued for the client to process.
emitBuffer:
Queue a buffer for later processing. This message is sent by the sound input process.
initialize
SoundRecorder new
isRecording
Answer true if the sound input process is running.
nextBufferOrNil
Answer the next input buffer or nil if no buffer is available.
startRecording
Start the sound input process.
stopRecording
Turn off the sound input process and close the driver.
SoundPlayer
SoundPlayer is the sound output engine: it mixes the currently playing sounds into sample buffers and hands those buffers to the underlying sound driver via primitives.
Instance Variables
boinkPitch:dur:loudness:waveTable:pan:
boinkScale
bufferMSecs
canStartPlayer
initialize
Subclasses should redefine this method to perform initializations on instance creation
isAllSilence:size:
isPlaying:
isReverbOn
lastPlayBuffer
oldStylePlayLoop
pauseSound:
playLoop
playSound:
playTestSample:pan:
playerProcess
primSoundAvailableBytes
primSoundGetVolume
primSoundInsertSamples:from:samplesOfLeadTime:
primSoundPlaySamples:from:startingAt:
primSoundSetVolumeLeft:volumeRight:
primSoundStartBufferSize:rate:stereo:
primSoundStartBufferSize:rate:stereo:semaIndex:
primSoundStop
resumePlaying:
resumePlaying:quickStart:
reverbState
samplingRate
setVolumeLeft:volumeRight:
shutDown
sineTable:
soundVolume
startPlayerProcessBufferSize:rate:stereo:
startPlayerProcessBufferSize:rate:stereo:sound:
startPlayingImmediately:
startReverb
startUp
startUpWithSound:
stereo
stopPlayerProcess
stopPlayingAll
stopReverb
useLastBuffer
useLastBuffer:
useShortBuffer
waitUntilDonePlaying:
SoundRecorder
A SoundRecorder records sound from the sound input hardware into a sequence of sample buffers, optionally compressing the result with a codec.
Instance Variables
bufferAvailableSema: <Object>
codec: <Object>
currentBuffer: <Object>
desiredSampleRate: <Object>
meterLevel: <Object>
meteringBuffer: <Object>
nextIndex: <Object>
paused: <Object>
recordLevel: <Object>
recordProcess: <Object>
recordedBuffers: <Object>
recordedSound: <Object>
samplingRate: <Object>
soundPlaying: <Object>
stereo: <Object>
bufferAvailableSema
- xxxxx
codec
- xxxxx
currentBuffer
- xxxxx
desiredSampleRate
- xxxxx
meterLevel
- xxxxx
meteringBuffer
- xxxxx
nextIndex
- xxxxx
paused
- xxxxx
recordLevel
- xxxxx
recordProcess
- xxxxx
recordedBuffers
- xxxxx
recordedSound
- xxxxx
samplingRate
- xxxxx
soundPlaying
- xxxxx
stereo
- xxxxx
allocateBuffer
Allocate a new buffer and reset nextIndex.
anyActive
canRecordWhilePlaying
clearRecordedSound
Clear the sound recorded thus far. Go into pause mode if currently recording.
codec:
condensedSamples
Return a single SoundBuffer that is the concatenation of all my recorded buffers.
condensedStereoSound
Decompose my buffers into left and right channels and return a mixed sound consisting of those two channels. This may take a while, since the data must be copied into new buffers.
copyFrom:to:normalize:dcOffset:
Return a new SoundBuffer containing the samples in the given range.
copyTo:from:to:from:startingAt:normalize:dcOffset:
Copy samples from buf to resultBuf removing the DC offset and normalizing their volume in the process.
desiredSampleRate:
Use of this method indicates a strong desire for the specified rate, even if the OS/hardware are not cooperative.
emitBuffer:
Since some sound recording devices cannot (or will not) record below a certain sample rate, trim the samples down if the user really wanted fewer samples.
emitPartialBuffer
endPlace
firstSampleOverThreshold:dcOffset:startingAt:
Beginning at startPlace, this routine will return the first place at which a sample exceeds the given threshold.
hasRecordedSound
Answer whether the receiver currently has any recorded sound
initialize
SoundRecorder new
initializeRecordingState
isPaused
Return true if recording is paused.
meterFrom:count:in:
Update the meter level with the maximum signal level in the given range of the given buffer.
meterLevel
Return the meter level, an integer in the range [0..100] where zero is silence and 100 represents the maximum signal level possible without clipping.
normalizeFactorFor:min:max:dcOffset:
Return a normalization factor for the range of sample values and DC offset. A normalization factor is a fixed-point number that will be divided by 1000 after multiplication with each sample value.
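The fixed-point scheme the comment describes can be sketched as below: the factor multiplies each offset-corrected sample and is then divided by 1000, so the inner loop stays in integer arithmetic. The 16-bit peak target is an assumption.

```python
def normalize_factor(min_val, max_val, dc_offset, peak=32767):
    """Answer a factor f such that (sample - dc_offset) * f // 1000
    scales the loudest recorded sample to about `peak`."""
    loudest = max(abs(min_val - dc_offset), abs(max_val - dc_offset))
    return (peak * 1000) // loudest

def normalize(samples, dc_offset, factor):
    """Remove the DC offset and scale each sample by the factor."""
    return [(s - dc_offset) * factor // 1000 for s in samples]

f = normalize_factor(-8000, 8192, 192)
loud = normalize([8192, -8000, 192], 192, f)
```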
pause
Go into pause mode. The record level continues to be updated, but no sound is recorded.
place:plus:
Return the place that is nSamples (may be negative) beyond thisPlace.
playback
Playback the sound that has been recorded.
primGetActualRecordingSampleRate
Return the actual sample rate being used for recording. This primitive fails unless sound recording is currently in progress.
primRecordSamplesInto:startingAt:
Record a sequence of 16-bit sound samples into the given array starting at the given sample index. Return the number of samples recorded, which may be zero if no samples are currently available.
primSetRecordLevel:
Set the desired recording level to the given value in the range 0-1000, where 0 is the lowest recording level and 1000 is the maximum. Do nothing if the sound input hardware does not support changing the recording level.
primStartRecordingDesiredSampleRate:stereo:semaIndex:
Start sound recording with the given stereo setting. Use a sampling rate as close to the desired rate as the underlying platform will support. If the given semaphore index is > 0, it is taken to be the index of a Semaphore in the external objects array to be signalled every time a recording buffer is filled.
primStopRecording
Stop sound recording. Do nothing if recording is not currently in progress, and do not fail if the plugin is not available.
recordLevel
recordLevel:
Set the desired recording level to the given value in the range 0.0 to 1.0, where 0.0 is the lowest recording level and 1.0 is the maximum. Do nothing if the sound input hardware does not support changing the recording level.
recordLoop
Record process loop that records samples.
recordedSound
Return the sound that was recorded.
resumeRecording
Continue recording from the point at which it was last paused.
samplesPerFrame
Can be overridden to quantize buffer size for, e.g., fixed-frame codecs.
samplingRate
samplingRate:
scanForEndThreshold:dcOffset:minLull:startingAt:
Beginning at startPlace, this routine will find the last sound that exceeds threshold, such that if you look lull samples later you will not find another sound over threshold within the following block of lull samples.
Return the place that is lull samples beyond to that last sound.
If no end of sound is found, return endPlace.
scanForStartThreshold:dcOffset:minDur:startingAt:
Beginning at startPlace, this routine will find the first sound that exceeds threshold, such that if you look duration samples later you will find another sound over threshold within the following block of duration samples.
Return the place that is duration samples prior to that first sound.
If no sound is found, return endPlace.
segmentsAbove:normalizedVolume:
Break the current recording up into a sequence of sound segments separated by silences.
soundSegments
startRecording
Turn on the sound input driver and start the recording process. Initially, recording is paused.
stopRecording
Stop the recording process and turn off the sound input driver.
suppressSilence
trim:normalizedVolume:
Remove the leading and trailing parts of this recording that are below the given threshold. Remove any DC offset and scale the recording so that its peaks are the given percent of the maximum volume.
verifyExistenceOfRecordedSound
If the receiver has a recorded sound, answer true; if not, put up an informer and answer false
StreamingMonoSound
I implement a streaming player for monophonic Sun (.au) and AIFF (.aif) audio files.
Example of use:
(StreamingMonoSound onFileNamed: 'song.aif') play.
closeFile
Close my stream, if it responds to close.
createMixer
Create a mixed sound consisting of sampled sounds with one sound buffer's worth of samples.
currentSampleIndex
Answer the index of the current sample.
duration
Answer the duration of this sound in seconds.
extractFrom:to:
Extract a portion of this sound between the given start and end times. The current implementation only works if the sound is uncompressed.
initStream:headerStart:
Initialize for streaming from the given stream. The audio file header starts at the given stream position.
loadBuffer:compressedSampleCount:
Load the given sound buffer from the compressed sample stream.
loadBuffer:uncompressedSampleCount:
Load the given sound buffer from the uncompressed sample stream.
loadBuffersForSampleCount:
Load the sound buffers from the stream.
loadFromLeftovers:sampleCount:
Load the given sound buffer from the samples leftover from the last frame. Answer the number of samples loaded, which typically is less than sampleCount.
millisecondsSinceStart
Answer the number of milliseconds since this sound started playing.
onFileNamed:
onFileNamed:headerStart:
playSampleCount:into:startingAt:
Mix the next n samples of this sound into the given buffer starting at the given index
positionCodecTo:
Position to the closest frame before the given sample index when using a codec. If using the ADPCM codec, try to ensure that it is in sync with the compressed sample stream.
readAIFFHeader
Read an AIFF file header from stream.
readHeader
Read the sound file header from my stream.
readSunAudioHeader
Read a Sun audio file header from my stream.
repeat
Answer the repeat flag.
repeat:
Set the repeat flag. If true, this sound will loop back to the beginning when it gets to the end.
reset
Reset my internal state for a replay. Methods that override this method should do super reset.
samplesRemaining
Answer the number of samples remaining to be played.
saveAsFileNamed:compressionType:
Store this sound in a new file with the given name using the given compression type. Useful for converting between compression formats.
soundPosition
Answer the relative position of sound playback as a number between 0.0 and 1.0.
soundPosition:
Jump to the position the given fraction through the sound file. The argument is a number between 0.0 and 1.0.
startOver
Jump back to the first sample.
storeSunAudioOn:compressionType:
Store myself on the given stream as a monophonic sound compressed with the given type of compression. The sampling rate is reduced to 22050 samples/second if it is higher.
streamSamplingRate
Answer the sampling rate of the sound stream.
volume
Answer my volume.
volume:
Set my volume to the given number between 0.0 and 1.0.
SunAudioFileWriter
I encode monophonic sampled sounds in Sun audio (.au) file format. Sun audio files have a very simple format but can store both compressed and uncompressed sample data. I can write this format either directly into a file or onto any writable binary stream.
appendBytes:
Append the given sample data to my stream.
appendSamples:
Append the given SoundBuffer to my stream.
closeFile
Update the Sun audio file header to reflect the final size of the sound data. If my stream is a file stream, close it and, on a Macintosh, set the file type and creator to that used by SoundApp for Sun Audio files. (This does nothing on other platforms.)
codecForFormatCode:
ensureOpen
Ensure that my stream is open.
formatCodeForCompressionType:
onFileNamed:
onStream:
setStream:
Initialize myself for writing on the given stream.
storeSampledSound:onFileNamed:compressionType:
updateHeaderDataSize
Update the Sun audio file header to reflect the final size of the sound data.
writeHeaderSamplingRate:
Write a Sun audio file header for 16-bit linear format.
writeHeaderSamplingRate:format:
Write a Sun audio file header for the given sampling rate and format. Currently, only monophonic files are supported.
TempoEvent
Represents a tempo change in a MIDI score.
isTempoEvent
printOn:
Append to the argument, aStream, a sequence of characters that
identifies the receiver.
tempo
tempo:
UnloadedSound
Instances of me, which are really just FMSounds, are used as placeholders for sounds that have been unloaded from this image but which may be re-loaded later.
default
VolumeEnvelope
A VolumeEnvelope controls the playback volume of a sound over time.
Instance Variables
currentVol: <Object>
mSecsForChange: <Object>
targetVol: <Object>
currentVol
- xxxxx
mSecsForChange
- xxxxx
targetVol
- xxxxx
computeSlopeAtMSecs:
Private! Find the next inflection point of this envelope and compute its target volume and the number of milliseconds until the inflection point is reached.
reset
Reset the state for this envelope.
updateSelector
Needed by the envelope editor.
updateTargetAt:
Update the volume envelope slope and limit for my target. Answer false.
volume:
Set the maximum volume of a volume-controlling envelope.
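The target-update step described above (find the next inflection point, answer its volume and the time remaining until it is reached) can be sketched as a small Python function; the point-list representation and names here are assumptions for illustration, not the Squeak implementation:

```python
def update_target_at(points, msecs):
    """Given envelope points as (msec, volume) pairs sorted by time, find the
    next inflection point after `msecs` and answer (targetVol, mSecsForChange),
    roughly what computeSlopeAtMSecs: derives for the envelope's target."""
    for t, vol in points:
        if t > msecs:
            return vol, t - msecs
    return points[-1][1], 0  # past the last point: hold the final volume
```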
WaveletCodec
The Wavelet codec performs a wavelet transform on the original data. It then achieves its compression by thresholding the transformed data, converting all values below a given magnitude to zero, and then run-coding the resulting data. The run-coding provides automatic variable compression depending on the parameters chosen.
As is, this codec achieves reasonable reproduction at 10:1 compression, although the quality from the GSMCodec is definitely better. I feel that the quality would be comparable if uLaw scaling were introduced prior to thresholding.
The nice thing about using wavelets is there are numerous factors to play with for better performance:
nLevels - the "order" of the transform performed
alpha and beta - these specify the wavelet shape (some are better for speech)
the actual threshold used
By simply changing these parameters, one can easily vary the compression achieved from 5:1 to 50:1, and listen to the quality at each step.
The specific format for an encoded buffer is as follows:
4 bytes: frameCount.
4 bytes: samplesPerFrame.
4 bytes: nLevels.
4 bytes: alpha asIEEE32BitWord.
4 bytes: beta asIEEE32BitWord.
frameCount occurrences of...
2 bytes: frameSize in bytes, not including these 2
(may be 0 for complete silence, in which case nothing follows, not even the scale).
4 bytes: scale asIEEE32BitWord.
A series of 1- or 2-byte values encoded as follows:
0-111: a run of N+1 consecutive 0's;
112-127: a run of (N-112)*256 + nextByte + 1 consecutive 0's;
128-255: a 15-bit signed value = (N*256 + nextByte) - 32768 - 16384.
bytesPerEncodedFrame
Answer the number of bytes required to hold one frame of compressed sound data. Answer zero if this codec produces encoded frames of variable size.
decodeFrames:from:at:into:at:
Decode the given number of monophonic frames starting at the given index in the given ByteArray of compressed sound data and storing the decoded samples into the given SoundBuffer starting at the given destination index. Answer a pair containing the number of bytes of compressed data consumed and the number of decompressed samples produced.
encodeFrames:from:at:into:at:
Encode the given number of frames starting at the given index in the given monophonic SoundBuffer and storing the encoded sound data into the given ByteArray starting at the given destination index. Encode only as many complete frames as will fit into the destination. Answer a pair containing the number of samples consumed and the number of bytes of compressed data produced.
frameCount:
Compute the frame count for this byteArray. This default computation will have to be overridden by codecs with variable frame sizes.
samplesPerFrame
Answer the number of sound samples per compression frame.