<!DOCTYPE html>
<html ng-app="notix">
<head>
<meta charset="utf-8">
<meta name="viewport" content="initial-scale=1, maximum-scale=1, user-scalable=no, width=device-width">
<title>Notix</title>
<link href="//code.ionicframework.com/1.0.0-beta.12/css/ionic.css" rel="stylesheet">
<script src="//code.ionicframework.com/1.0.0-beta.12/js/ionic.bundle.js"></script>
</head>
<body ng-controller="MainCtrl">
<ion-pane>
<ion-header-bar class="bar-stable">
<h1 class="title">NOTIX</h1>
</ion-header-bar>
<ion-content ng-show="!audio.context" class="has-header padding">
<div class="card">
<i class="icon ion-sad"></i> Your device doesn't support the Web Audio API.
</div>
</ion-content>
<ion-content ng-show="audio.context" class="has-header padding">
<div class="card">
<div class="button button-block selectFile button-calm" ng-click="goFullScreen()" file-input="file" accept="audio/*" on-change="readFile()"></div>
</div>
<div class="card left" ng-show="state.status > enums.state.init">
<div class="item item-text-wrap">
{{file.name | limitTo: 40 }} ({{ getDuration(state.duration) | date: 'mm:ss' }})
</div>
<div class="range range-dark">
<i class="icon" ng-click="toggleState()" ng-class="{'ion-play': state.status === enums.state.ready, 'ion-pause': state.status === enums.state.playing}"></i>
{{ getDuration((audio.filter.sourcePosition || 0) / (audio.buffer.sampleRate || 1)) | date: 'mm:ss' }}
<input type="range" name="progress" ng-model="state.progress" min="0" max="{{state.duration}}" step="0.01" ng-change="seekTo(state.progress)" />
{{ getDuration(state.duration) | date: 'mm:ss' }}
<i class="icon ion-stop" ng-click="stopPlaying()"></i>
</div>
<div class="range range-assertive">
<i class="icon ion-volume-low"></i>
<input type="range" name="volume" ng-model="audio.gainNode.gain.value" min="0" max="1" step="0.05" />
<i class="icon ion-volume-high"></i>
</div>
<div class="range range-positive">
<i class="icon ion-ios7-rewind"></i>
<input type="range" name="tempo" ng-model="state.tempo" min="{{options.defaults.tempo - options.range.tempo}}" max="{{options.defaults.tempo + options.range.tempo}}" step="5" ng-change="setPitch(state.pitch, state.tempo)" />
<i class="icon ion-ios7-fastforward"></i>
</div>
<div class="range range-royal">
<i class="icon ion-arrow-graph-down-right"></i>
<input type="range" name="pitch" ng-model="state.pitch" min="{{-options.range.pitch}}" max="{{options.range.pitch}}" step="0.5" ng-change="setPitch(state.pitch, state.tempo)" />
<i class="icon ion-arrow-graph-up-right"></i>
</div>
</div>
<div class="card left" ng-show="state.status > enums.state.init">
<div class="item item-input">
<i class="icon ion-android-volume"></i>
<span class="input-label">Volume (%): </span>
{{(audio.gainNode.gain.value * 100).toFixed(0)}}
</div>
<div class="item item-input">
<i class="icon ion-ios7-speedometer-outline"></i>
<span class="input-label">Playback Rate (%):</span>
<input type="number" ng-model="state.tempo" min="50" max="150" />
</div>
<div class="item item-input">
<i class="icon ion-arrow-graph-up-right"></i>
<span class="input-label">Pitch (Semitones):</span>
<input type="number" ng-model="state.pitch" min="-7" max="7" />
</div>
<div track-progress progress="state.progress" total="state.duration" on-change="seekTo">
</div>
<div class="item">
<button class="button button-block icon-left button-assertive" ng-click="resetParams()"><i class="icon ion-ios7-refresh-outline"></i> Reset</button>
</div>
</div>
<div class="card" ng-show="state.status === enums.state.init">
<div class="item item-text-wrap text-center">
Select a file, then adjust the pitch and tempo to your needs. <br/>
Please make sure you are using the latest Chrome browser!
</div>
</div>
</ion-content>
</ion-pane>
<script type="text/ng-template" id="trackProgress.html">
<div class="item item-input">
<span class="input-label">
<i class="icon ion-ios7-stopwatch-outline"></i>
Start from:
</span>
<input type='text' ng-model='time' placeholder="00:00" style="padding-right: 0; max-width: 3.5em;" />
<button class="button-calm button" ng-click="setProgress(time)" style="margin-left: 60px;"><i class="icon ion-play"></i></button>
</div>
</script>
</body>
</html>
/* Hide the native file control and draw a "Select file" label over it. */
.selectFile {
color: transparent !important;
width: 100% !important;
}
.selectFile:hover {
color: transparent;
}
.selectFile::-webkit-file-upload-button {
visibility: hidden;
}
.selectFile::before {
content: 'Select file';
display: inline-block;
color: white;
outline: none;
white-space: nowrap;
-webkit-user-select: none;
cursor: pointer;
text-align: center;
position: absolute;
left: -2px;
top: 5px;
width: 100%;
}
.card.left {
float: left;
min-width: calc(50% - 10px);
max-width: 480px;
margin-right: 10px;
}
// Notix app bootstrap
// angular.module is a global place for creating, registering and retrieving Angular modules.
// 'notix' is the name of this module (also set as ng-app on the <html> tag in the markup);
// the 2nd parameter is an array of module dependencies.
angular.module('notix', ['ionic', 'notix.controllers', 'notix.services', 'notix.directives'])
.constant('options', {
range: {
pitch: 7,
tempo: 50
},
defaults: {
pitch: 0,
tempo: 100
},
consts: {
pitchExp: 0.69314718056, // ln(2); converts semitones to a frequency ratio
EOF: 65535 // frames from the end at which playback counts as finished
}
})
.run(function($ionicPlatform) {
$ionicPlatform.ready(function() {
// Hide the accessory bar by default (remove this to show the accessory bar above the keyboard
// for form inputs)
if(window.cordova && window.cordova.plugins.Keyboard) {
cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);
}
if(window.StatusBar) {
StatusBar.styleDefault();
}
});
});
angular.module('notix.controllers', ['notix.directives', 'notix.services'])
.constant('enums', {
state: {
init: 0,
ready: 1,
playing: 2,
paused: 3,
stopped: 4
}
})
.controller('MainCtrl', function($scope, $ionicLoading, $window, $timeout, $interval, $parse, options, enums, audioContext, soundtouch) {
function prepareTrack(buffer) {
$ionicLoading.show({
template: "Decoding file..."
});
// decodeAudioData is asynchronous; wrap the scope mutations in $apply.
audioContext.decodeAudioData(buffer, function(decoded) {
$scope.$apply(function() {
$scope.audio.buffer = decoded;
$scope.state.progress = 0;
$scope.state.duration = decoded.length / decoded.sampleRate;
$scope.state.status = enums.state.ready;
$ionicLoading.hide();
});
});
}
function setDefaultValues() {
$scope.state.tempo = options.defaults.tempo;
$scope.state.pitch = options.defaults.pitch;
$scope.state.progress = 0;
$scope.state.duration = 0;
}
function isPlaying() {
return $scope.state.status === enums.state.playing;
}
angular.extend($scope, {
options: options,
enums: enums,
state: {
tempo: options.defaults.tempo,
pitch: options.defaults.pitch,
duration: 0,
progress: 0,
status: enums.state.init
},
audio: {
context: audioContext,
// Guard: createGain only exists when Web Audio is supported (context may be null).
gainNode: audioContext ? audioContext.createGain() : null
},
getDuration: function(val) {
// Wrap seconds in a Date so the view can format it with the 'date' filter (mm:ss).
return new Date(0, 0, 0, 0, 0, val);
},
setPitch: function(val, tempo) {
var isCurrentlyPlaying = isPlaying();
if (isCurrentlyPlaying)
$scope.toggleState();
$scope.state.pitch = Math.min(Math.max(val, -(options.range.pitch)), (options.range.pitch));
$scope.state.tempo = Math.min(Math.max(tempo, options.defaults.tempo - options.range.tempo), options.defaults.tempo + options.range.tempo);
if (isCurrentlyPlaying)
$scope.toggleState();
},
toggleState: function() {
if ($scope.state.status !== enums.state.playing) {
$scope.soundtouch = new soundtouch.SoundTouch();
$scope.audio.source = new soundtouch.WebAudioBufferSource($scope.audio.buffer);
$scope.audio.filter = new soundtouch.SimpleFilter($scope.audio.source,
$scope.soundtouch,
$scope.state.progress * $scope.audio.buffer.sampleRate);
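// pitchExp is ln(2), so this computes 2^(semitones / 12): the standard
// equal-temperament frequency ratio for a pitch shift in semitones.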
var pitch = Math.exp(options.consts.pitchExp * $scope.state.pitch / 12);
$scope.soundtouch.pitch = pitch;
$scope.soundtouch.tempo = $scope.state.tempo / 100;
$scope.state.status = enums.state.playing;
$scope.audio.node = soundtouch.getWebAudioNode($scope.audio.context, $scope.audio.filter);
$scope.audio.node.connect($scope.audio.gainNode);
$scope.audio.gainNode.connect(audioContext.destination);
}
else
{
$scope.state.status = enums.state.ready;
$scope.audio.node.disconnect();
$scope.audio.gainNode.disconnect();
}
},
seekTo: function(seekTime) {
$scope.stopPlaying();
$scope.state.progress = seekTime;
$scope.toggleState();
},
setProgress: function(progress) {
var isCurrentlyPlaying = isPlaying();
if (isCurrentlyPlaying)
$scope.toggleState();
$scope.state.progress = progress;
if (isCurrentlyPlaying)
$scope.toggleState();
},
stopPlaying: function() {
if ($scope.audio.node && $scope.audio.gainNode) {
$scope.audio.node.disconnect();
$scope.audio.gainNode.disconnect();
}
$scope.state.status = enums.state.ready;
$scope.state.progress = 0;
},
resetParams: function() {
var isCurrentlyPlaying = isPlaying();
if (isCurrentlyPlaying)
$scope.toggleState();
$scope.setPitch(options.defaults.pitch, options.defaults.tempo);
if (isCurrentlyPlaying)
$scope.toggleState();
},
readFile: function() {
if (!$scope.file)
return;
$timeout($scope.resetParams);
$scope.state.status = enums.state.init;
var fileReader = new FileReader();
fileReader.onload = function(event) {
prepareTrack(event.target.result);
};
$ionicLoading.show({
template: "Reading file..."
});
fileReader.readAsArrayBuffer($scope.file);
}
});
var progressWatch = $interval(function() {
if (isPlaying()) {
$scope.state.progress = Math.floor(
($scope.audio.filter ? $scope.audio.filter.sourcePosition : 0) /
($scope.audio.buffer ? $scope.audio.buffer.sampleRate : 1));
// Stop once the source position is within EOF frames of the end.
if (!($scope.audio.filter.sourcePosition < $scope.audio.buffer.length - options.consts.EOF))
$scope.stopPlaying();
}
}, 1000);
// Cancel the polling interval when the scope is destroyed to avoid a leak.
$scope.$on('$destroy', function() {
$interval.cancel(progressWatch);
});
});
angular.module('notix.directives', [])
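// fileInput replaces its host element with an <input type="file">, writes the
// chosen File into the parent-scope property named by the attribute value, and
// then evaluates the on-change expression (see the .selectFile div in the markup).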
.directive('fileInput', function ($parse) {
return {
restrict: "EA",
template: "<input type='file' ng-transclude />",
replace: true,
transclude: true,
link: function (scope, element, attrs) {
var modelGet = $parse(attrs.fileInput);
var modelSet = modelGet.assign;
var onChange = $parse(attrs.onChange);
var updateModel = function () {
scope.$apply(function () {
modelSet(scope.$parent, element[0].files[0]);
onChange(scope.$parent);
});
};
element.bind('change', updateModel);
}
};
})
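// trackProgress renders the "Start from" mm:ss field (template trackProgress.html)
// and calls the expression named in its on-change attribute (seekTo) with the
// parsed number of seconds.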
.directive('trackProgress', function ($parse) {
return {
restrict: "EA",
templateUrl: "trackProgress.html",
scope: {},
replace: true,
link: function(scope, elm, attrs) {
angular.extend(scope, {
total: 0,
progress: 0,
time: 0,
setProgress: function(time) {
var seconds = 0;
if (time.indexOf(':') > 0) {
var str = time.split(':');
seconds = (Number(str[0]) * 60 + Number(str[1]));
}
else {
seconds = time;
}
scope.$parent.$eval(attrs.onChange + '(' + seconds + ')');
}
});
scope.$parent.$watch(attrs.progress, function(progress) {
scope.progress = progress;
var str = new Date(0, 0, 0, 0, 0, progress).toTimeString().split(" ")[0].split(":");
// Show total minutes when the track is an hour or longer, otherwise mm:ss.
var time = (Number(str[0]) > 0) ? (60 * Number(str[0]) + Number(str[1])) : str[1] + ':' + str[2];
scope.time = time;
});
}
}
});
angular.module('notix.services', [])
.factory('audioContext', function($window) {
// Prefer the standard constructor and fall back to vendor-prefixed ones.
var ContextClass =
$window.AudioContext ||
$window.webkitAudioContext ||
$window.mozAudioContext ||
$window.oAudioContext ||
$window.msAudioContext;
// Return null when Web Audio is unsupported; the view tests audio.context.
return ContextClass ? new ContextClass() : null;
})
.service('soundtouch', function() {
/*
* SoundTouch JS audio processing library
* Copyright (c) Olli Parviainen
* Copyright (c) Ryan Berdeen
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
/**
* Giving this value for the sequence length sets automatic parameter value
* according to tempo setting (recommended)
*/
var USE_AUTO_SEQUENCE_LEN = 0;
/**
* Default length of a single processing sequence, in milliseconds. This determines
* how long the sequences are into which the original sound is chopped
* in the time-stretch algorithm.
*
* The larger this value is, the fewer sequences are used in processing. In principle
* a larger value sounds better when slowing down the tempo, but worse when increasing
* it, and vice versa.
*
* Increasing this value reduces the computational burden and vice versa.
*/
//var DEFAULT_SEQUENCE_MS = 130
var DEFAULT_SEQUENCE_MS = USE_AUTO_SEQUENCE_LEN;
/**
* Giving this value for the seek window length sets automatic parameter value
* according to tempo setting (recommended)
*/
var USE_AUTO_SEEKWINDOW_LEN = 0;
/**
* Default seek window length in milliseconds for the algorithm that finds the best
* possible overlap location. This determines how wide a window the algorithm may
* search for an optimal joining position when mixing the sound sequences back together.
*
* The bigger this window is, the more likely a better mixing position will be found,
* but at the same time large values may cause a "drifting" artifact because
* consecutive sequences are then taken at more uneven intervals.
*
* If there's a disturbing artifact that sounds as if a constant frequency was drifting
* around, try reducing this setting.
*
* Increasing this value increases the computational burden and vice versa.
*/
//var DEFAULT_SEEKWINDOW_MS = 25;
var DEFAULT_SEEKWINDOW_MS = USE_AUTO_SEEKWINDOW_LEN;
/**
* Overlap length in milliseconds. When the chopped sound sequences are mixed back together
* to form a continuous sound stream, this parameter defines for how long a period the two
* consecutive sequences are allowed to overlap.
*
* This shouldn't be a critical parameter. If you reduce the DEFAULT_SEQUENCE_MS setting
* by a large amount, you might wish to try a smaller value for this.
*
* Increasing this value increases computational burden and vice versa.
*/
var DEFAULT_OVERLAP_MS = 8;
// Table for the hierarchical mixing position seeking algorithm
var _SCAN_OFFSETS = [
[ 124, 186, 248, 310, 372, 434, 496, 558, 620, 682, 744, 806,
868, 930, 992, 1054, 1116, 1178, 1240, 1302, 1364, 1426, 1488, 0],
[-100, -75, -50, -25, 25, 50, 75, 100, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ -20, -15, -10, -5, 5, 10, 15, 20, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ -4, -3, -2, -1, 1, 2, 3, 4, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]];
// Adjust the tempo param according to tempo, so that a varying processing sequence length
// is used at various tempo settings, between the given low...top limits
var AUTOSEQ_TEMPO_LOW = 0.5; // auto setting low tempo range (-50%)
var AUTOSEQ_TEMPO_TOP = 2.0; // auto setting top tempo range (+100%)
// sequence-ms setting values at above low & top tempo
var AUTOSEQ_AT_MIN = 125.0;
var AUTOSEQ_AT_MAX = 50.0;
var AUTOSEQ_K = ((AUTOSEQ_AT_MAX - AUTOSEQ_AT_MIN) / (AUTOSEQ_TEMPO_TOP - AUTOSEQ_TEMPO_LOW));
var AUTOSEQ_C = (AUTOSEQ_AT_MIN - (AUTOSEQ_K) * (AUTOSEQ_TEMPO_LOW));
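// Linear map: sequenceMs(tempo) = AUTOSEQ_C + AUTOSEQ_K * tempo,
// giving 125 ms at tempo 0.5 and 50 ms at tempo 2.0.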
// seek-window-ms setting values at above low & top tempo
var AUTOSEEK_AT_MIN = 25.0;
var AUTOSEEK_AT_MAX = 15.0;
var AUTOSEEK_K = ((AUTOSEEK_AT_MAX - AUTOSEEK_AT_MIN) / (AUTOSEQ_TEMPO_TOP - AUTOSEQ_TEMPO_LOW));
var AUTOSEEK_C = (AUTOSEEK_AT_MIN - (AUTOSEEK_K) * (AUTOSEQ_TEMPO_LOW));
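// Same linear scheme for the seek window:
// seekWindowMs(tempo) = AUTOSEEK_C + AUTOSEEK_K * tempo (25 ms at 0.5, 15 ms at 2.0).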
function extend(a,b) {
for (var i in b) {
var g = b.__lookupGetter__(i),
s = b.__lookupSetter__(i);
if (g || s) {
if (g) {
a.__defineGetter__(i, g);
}
if (s) {
a.__defineSetter__(i, s);
}
}
else {
a[i] = b[i];
}
}
return a;
}
// NOTE: despite its name, this returns true when a and b DIFFER by more than
// 1e-10; callers below use it as a "value has changed" test.
function testFloatEqual(a, b) {
return (a > b ? a - b : b - a) > 1e-10;
}
function AbstractFifoSamplePipe(createBuffers) {
if (createBuffers) {
this.inputBuffer = new FifoSampleBuffer();
this.outputBuffer = new FifoSampleBuffer();
}
else {
this.inputBuffer = this.outputBuffer = null;
}
}
AbstractFifoSamplePipe.prototype = {
get inputBuffer() {
return this._inputBuffer;
},
set inputBuffer(inputBuffer) {
this._inputBuffer = inputBuffer;
},
get outputBuffer() {
return this._outputBuffer;
},
set outputBuffer(outputBuffer) {
this._outputBuffer = outputBuffer;
},
clear: function() {
this._inputBuffer.clear();
this._outputBuffer.clear();
}
};
function RateTransposer(createBuffers) {
AbstractFifoSamplePipe.call(this, createBuffers);
this._reset();
this.rate = 1;
}
extend(RateTransposer.prototype, AbstractFifoSamplePipe.prototype);
extend(RateTransposer.prototype, {
set rate(rate) {
this._rate = rate;
// TODO aa filter
},
_reset: function() {
this.slopeCount = 0;
this.prevSampleL = 0;
this.prevSampleR = 0;
},
process: function() {
// TODO aa filter
var numFrames = this._inputBuffer.frameCount;
this._outputBuffer.ensureAdditionalCapacity(numFrames / this._rate + 1);
var numFramesOutput = this._transpose(numFrames);
this._inputBuffer.receive();
this._outputBuffer.put(numFramesOutput);
},
_transpose: function(numFrames) {
if (numFrames === 0) {
return 0; // No work.
}
var src = this._inputBuffer.vector;
var srcOffset = this._inputBuffer.startIndex;
var dest = this._outputBuffer.vector;
var destOffset = this._outputBuffer.endIndex;
var used = 0;
var i = 0;
while (this.slopeCount < 1.0) {
dest[destOffset + 2 * i] = (1.0 - this.slopeCount) * this.prevSampleL + this.slopeCount * src[srcOffset];
dest[destOffset + 2 * i + 1] = (1.0 - this.slopeCount) * this.prevSampleR + this.slopeCount * src[srcOffset + 1];
i++;
this.slopeCount += this._rate;
}
this.slopeCount -= 1.0;
if (numFrames != 1) {
out: while (true) {
while (this.slopeCount > 1.0) {
this.slopeCount -= 1.0;
used++;
if (used >= numFrames - 1) {
break out;
}
}
var srcIndex = srcOffset + 2 * used;
dest[destOffset + 2 * i] = (1.0 - this.slopeCount) * src[srcIndex] + this.slopeCount * src[srcIndex + 2];
dest[destOffset + 2 * i + 1] = (1.0 - this.slopeCount) * src[srcIndex + 1] + this.slopeCount * src[srcIndex + 3];
i++;
this.slopeCount += this._rate;
}
}
this.prevSampleL = src[srcOffset + 2 * numFrames - 2];
this.prevSampleR = src[srcOffset + 2 * numFrames - 1];
return i;
}
});
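// FifoSampleBuffer stores interleaved stereo samples: one frame is a
// left/right pair, so every index below is a frame count multiplied by two.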
function FifoSampleBuffer() {
this._vector = new Float32Array();
this._position = 0;
this._frameCount = 0;
}
FifoSampleBuffer.prototype = {
get vector() {
return this._vector;
},
get position() {
return this._position;
},
get startIndex() {
return this._position * 2;
},
get frameCount() {
return this._frameCount;
},
get endIndex() {
return (this._position + this._frameCount) * 2;
},
clear: function() {
// Consume everything currently buffered, then rewind the backing store.
this.receive(this._frameCount);
this.rewind();
},
put: function(numFrames) {
this._frameCount += numFrames;
},
putSamples: function(samples, position, numFrames) {
position = position || 0;
var sourceOffset = position * 2;
if (!(numFrames >= 0)) {
numFrames = (samples.length - sourceOffset) / 2;
}
var numSamples = numFrames * 2;
this.ensureCapacity(numFrames + this._frameCount);
var destOffset = this.endIndex;
this._vector.set(samples.subarray(sourceOffset, sourceOffset + numSamples), destOffset);
this._frameCount += numFrames;
},
putBuffer: function(buffer, position, numFrames) {
position = position || 0;
if (!(numFrames >= 0)) {
numFrames = buffer.frameCount - position;
}
this.putSamples(buffer.vector, buffer.position + position, numFrames);
},
receive: function(numFrames) {
if (!(numFrames >= 0) || numFrames > this._frameCount) {
numFrames = this._frameCount;
}
this._frameCount -= numFrames;
this._position += numFrames;
},
receiveSamples: function(output, numFrames) {
var numSamples = numFrames * 2;
var sourceOffset = this.startIndex;
output.set(this._vector.subarray(sourceOffset, sourceOffset + numSamples));
this.receive(numFrames);
},
extract: function(output, position, numFrames) {
var sourceOffset = this.startIndex + position * 2;
var numSamples = numFrames * 2;
output.set(this._vector.subarray(sourceOffset, sourceOffset + numSamples));
},
ensureCapacity: function(numFrames) {
var minLength = numFrames * 2;
if (this._vector.length < minLength) {
var newVector = new Float32Array(minLength);
newVector.set(this._vector.subarray(this.startIndex, this.endIndex));
this._vector = newVector;
this._position = 0;
}
else {
this.rewind();
}
},
ensureAdditionalCapacity: function(numFrames) {
this.ensureCapacity(this.frameCount + numFrames);
},
rewind: function() {
if (this._position > 0) {
this._vector.set(this._vector.subarray(this.startIndex, this.endIndex));
this._position = 0;
}
}
};
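// SimpleFilter adapts a frame source (here WebAudioBufferSource) to a
// processing pipe, keeping up to historyBufferSize frames of produced output
// so the playback position can be moved backwards a short distance.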
function SimpleFilter(sourceSound, pipe, position) {
this._pipe = pipe;
this.sourceSound = sourceSound;
this.historyBufferSize = 192400;
this._sourcePosition = position;
this.outputBufferPosition = 0;
this._position = position;
}
SimpleFilter.prototype = {
get pipe() {
return this._pipe;
},
get position() {
return this._position;
},
set position(position) {
if (position > this._position) {
throw new RangeError('New position may not be greater than current position');
}
var newOutputBufferPosition = this.outputBufferPosition - (this._position - position);
if (newOutputBufferPosition < 0) {
throw new RangeError('New position falls outside of history buffer');
}
this.outputBufferPosition = newOutputBufferPosition;
this._position = position;
},
get sourcePosition() {
return this._sourcePosition;
},
set sourcePosition(sourcePosition) {
this.clear();
this._sourcePosition = sourcePosition;
},
get inputBuffer() {
return this._pipe.inputBuffer;
},
get outputBuffer() {
return this._pipe.outputBuffer;
},
fillInputBuffer: function(numFrames) {
var samples = new Float32Array(numFrames * 2);
var numFramesExtracted = this.sourceSound.extract(samples, numFrames, this._sourcePosition);
this._sourcePosition += numFramesExtracted;
this.inputBuffer.putSamples(samples, 0, numFramesExtracted);
},
fillOutputBuffer: function(numFrames) {
while (this.outputBuffer.frameCount < numFrames) {
// TODO hardcoded buffer size
var numInputFrames = (8192 * 2) - this.inputBuffer.frameCount;
this.fillInputBuffer(numInputFrames);
if (this.inputBuffer.frameCount < (8192 * 2)) {
break;
// TODO flush pipe
}
this._pipe.process();
}
},
extract: function(target, numFrames) {
this.fillOutputBuffer(this.outputBufferPosition + numFrames);
var numFramesExtracted = Math.min(numFrames, this.outputBuffer.frameCount - this.outputBufferPosition);
this.outputBuffer.extract(target, this.outputBufferPosition, numFramesExtracted);
var currentFrames = this.outputBufferPosition + numFramesExtracted;
this.outputBufferPosition = Math.min(this.historyBufferSize, currentFrames);
this.outputBuffer.receive(Math.max(currentFrames - this.historyBufferSize, 0));
this._position += numFramesExtracted;
return numFramesExtracted;
},
handleSampleData: function(e) {
this.extract(e.data, 4096);
},
clear: function() {
// TODO yuck
this._pipe.clear();
this.outputBufferPosition = 0;
}
};
function Stretch(createBuffers) {
AbstractFifoSamplePipe.call(this, createBuffers);
this.bQuickSeek = true;
this.bMidBufferDirty = false;
this.pMidBuffer = null;
this.overlapLength = 0;
this.bAutoSeqSetting = true;
this.bAutoSeekSetting = true;
this._tempo = 1;
this.setParameters(44100, DEFAULT_SEQUENCE_MS, DEFAULT_SEEKWINDOW_MS, DEFAULT_OVERLAP_MS);
}
extend(Stretch.prototype, AbstractFifoSamplePipe.prototype);
extend(Stretch.prototype, {
clear: function() {
AbstractFifoSamplePipe.prototype.clear.call(this);
this._clearMidBuffer();
},
_clearMidBuffer: function() {
if (this.bMidBufferDirty) {
this.bMidBufferDirty = false;
this.pMidBuffer = null;
}
},
/**
* Sets routine control parameters. These are time constants that define
* how the sound is stretched to the desired duration.
*
* 'sampleRate' = sample rate of the sound
* 'sequenceMS' = one processing sequence length in milliseconds
* (zero selects the automatic, tempo-dependent setting)
* 'seekwindowMS' = seek window length for scanning the best overlapping
* position (zero selects the automatic setting)
* 'overlapMS' = overlap length (default = 8 ms)
*/
setParameters: function(aSampleRate, aSequenceMS, aSeekWindowMS, aOverlapMS) {
// accept only positive parameter values - if zero or negative, use old values instead
if (aSampleRate > 0) {
this.sampleRate = aSampleRate;
}
if (aOverlapMS > 0) {
this.overlapMs = aOverlapMS;
}
if (aSequenceMS > 0) {
this.sequenceMs = aSequenceMS;
this.bAutoSeqSetting = false;
}
else {
// zero or below, use automatic setting
this.bAutoSeqSetting = true;
}
if (aSeekWindowMS > 0) {
this.seekWindowMs = aSeekWindowMS;
this.bAutoSeekSetting = false;
}
else {
// zero or below, use automatic setting
this.bAutoSeekSetting = true;
}
this.calcSeqParameters();
this.calculateOverlapLength(this.overlapMs);
// set tempo to recalculate 'sampleReq'
this.tempo = this._tempo;
},
/**
* Sets a new target tempo. A value of 1.0 is normal tempo; smaller values
* represent a slower tempo, larger values a faster one.
*/
set tempo(newTempo) {
var intskip;
this._tempo = newTempo;
// Calculate new sequence duration
this.calcSeqParameters();
// Calculate ideal skip length (according to tempo value)
this.nominalSkip = this._tempo * (this.seekWindowLength - this.overlapLength);
this.skipFract = 0;
intskip = Math.floor(this.nominalSkip + 0.5);
// Calculate how many samples are needed in the 'inputBuffer' to
// process another batch of samples
this.sampleReq = Math.max(intskip + this.overlapLength, this.seekWindowLength) + this.seekLength;
},
get inputChunkSize() {
return this.sampleReq;
},
get outputChunkSize() {
return this.overlapLength + Math.max(0, this.seekWindowLength - 2 * this.overlapLength);
},
/**
* Calculates overlapInMsec period length in samples.
*/
calculateOverlapLength: function(overlapInMsec) {
var newOvl;
// TODO assert(overlapInMsec >= 0);
newOvl = (this.sampleRate * overlapInMsec) / 1000;
if (newOvl < 16) newOvl = 16;
// must be divisible by 8
newOvl -= newOvl % 8;
this.overlapLength = newOvl;
this.pRefMidBuffer = new Float32Array(this.overlapLength * 2);
this.pMidBuffer = new Float32Array(this.overlapLength * 2);
},
checkLimits: function(x, mi, ma) {
return (x < mi) ? mi : ((x > ma) ? ma : x);
},
/**
* Calculates processing sequence length according to tempo setting
*/
calcSeqParameters: function() {
var seq;
var seek;
if (this.bAutoSeqSetting) {
seq = AUTOSEQ_C + AUTOSEQ_K * this._tempo;
seq = this.checkLimits(seq, AUTOSEQ_AT_MAX, AUTOSEQ_AT_MIN);
this.sequenceMs = Math.floor(seq + 0.5);
}
if (this.bAutoSeekSetting) {
seek = AUTOSEEK_C + AUTOSEEK_K * this._tempo;
seek = this.checkLimits(seek, AUTOSEEK_AT_MAX, AUTOSEEK_AT_MIN);
this.seekWindowMs = Math.floor(seek + 0.5);
}
// Update seek window lengths
this.seekWindowLength = Math.floor((this.sampleRate * this.sequenceMs) / 1000);
this.seekLength = Math.floor((this.sampleRate * this.seekWindowMs) / 1000);
},
/**
* Enables/disables the quick position seeking algorithm.
*/
set quickSeek(enable) {
this.bQuickSeek = enable;
},
/**
* Seeks for the optimal overlap-mixing position.
*/
seekBestOverlapPosition: function() {
if (this.bQuickSeek) {
return this.seekBestOverlapPositionStereoQuick();
}
else {
return this.seekBestOverlapPositionStereo();
}
},
/**
* Seeks for the optimal overlap-mixing position. The 'stereo' version of the
* routine
*
* The best position is determined as the position where the two overlapped
* sample sequences are 'most alike', in terms of the highest cross-correlation
* value over the overlapping period
*/
seekBestOverlapPositionStereo: function() {
var bestOffs, bestCorr, corr, i;
// Slopes the amplitudes of the 'midBuffer' samples.
this.precalcCorrReferenceStereo();
bestCorr = Number.MIN_VALUE;
bestOffs = 0;
// Scans for the best correlation value by testing each possible position
// over the permitted range.
for (i = 0; i < this.seekLength; i++) {
// Calculates correlation value for the mixing position corresponding
// to 'i'
corr = this.calcCrossCorrStereo(2 * i, this.pRefMidBuffer);
// Checks for the highest correlation value.
if (corr > bestCorr) {
bestCorr = corr;
bestOffs = i;
}
}
return bestOffs;
},
/**
* Seeks for the optimal overlap-mixing position using a quick, four-pass
* hierarchical search (the fast stereo version of the routine).
*
* The best position is determined as the position where the two overlapped
* sample sequences are 'most alike', in terms of the highest cross-correlation
* value over the overlapping period.
*/
seekBestOverlapPositionStereoQuick: function() {
var j, bestOffs, bestCorr, corr, scanCount, corrOffset, tempOffset;
// Slopes the amplitude of the 'midBuffer' samples
this.precalcCorrReferenceStereo();
bestCorr = Number.MIN_VALUE;
bestOffs = 0;
corrOffset = 0;
tempOffset = 0;
// Scans for the best correlation value using four-pass hierarchical search.
//
// The look-up table 'scans' has hierarchical position adjusting steps.
// In the first pass the routine searches for the highest correlation with
// relatively coarse steps, then rescans the neighbourhood of the highest
// correlation with better resolution, and so on.
for (scanCount = 0; scanCount < 4; scanCount++) {
j = 0;
while (_SCAN_OFFSETS[scanCount][j]) {
tempOffset = corrOffset + _SCAN_OFFSETS[scanCount][j];
if (tempOffset >= this.seekLength) {
break;
}
// Calculates correlation value for the mixing position corresponding
// to 'tempOffset'
corr = this.calcCrossCorrStereo(2 * tempOffset, this.pRefMidBuffer);
// Checks for the highest correlation value
if (corr > bestCorr) {
bestCorr = corr;
bestOffs = tempOffset;
}
j++;
}
corrOffset = bestOffs;
}
return bestOffs;
},
/**
* Slopes the amplitude of the 'midBuffer' samples so that cross correlation
* is faster to calculate
*/
precalcCorrReferenceStereo: function() {
var i, cnt2, temp;
for (i = 0; i < this.overlapLength; i++) {
temp = i * (this.overlapLength - i);
cnt2 = i * 2;
this.pRefMidBuffer[cnt2] = this.pMidBuffer[cnt2] * temp;
this.pRefMidBuffer[cnt2 + 1] = this.pMidBuffer[cnt2 + 1] * temp;
}
},
calcCrossCorrStereo: function(mixingPos, compare) {
var mixing = this._inputBuffer.vector;
mixingPos += this._inputBuffer.startIndex;
var corr, i, mixingOffset;
corr = 0;
for (i = 2; i < 2 * this.overlapLength; i += 2) {
mixingOffset = i + mixingPos;
corr += mixing[mixingOffset] * compare[i] +
mixing[mixingOffset + 1] * compare[i + 1];
}
return corr;
},
// TODO inline
/**
* Overlaps samples in 'midBuffer' with the samples in 'pInputBuffer' at position
* of 'ovlPos'.
*/
overlap: function(ovlPos) {
this.overlapStereo(2 * ovlPos);
},
/**
* Overlaps samples in 'midBuffer' with the samples in 'pInput'
*/
overlapStereo: function(pInputPos) {
var pInput = this._inputBuffer.vector;
pInputPos += this._inputBuffer.startIndex;
var pOutput = this._outputBuffer.vector,
pOutputPos = this._outputBuffer.endIndex,
i, cnt2, fTemp, fScale, fi, pInputOffset, pOutputOffset;
fScale = 1 / this.overlapLength;
for (i = 0; i < this.overlapLength; i++) {
fTemp = (this.overlapLength - i) * fScale;
fi = i * fScale;
cnt2 = 2 * i;
pInputOffset = cnt2 + pInputPos;
pOutputOffset = cnt2 + pOutputPos;
pOutput[pOutputOffset + 0] = pInput[pInputOffset + 0] * fi + this.pMidBuffer[cnt2 + 0] * fTemp;
pOutput[pOutputOffset + 1] = pInput[pInputOffset + 1] * fi + this.pMidBuffer[cnt2 + 1] * fTemp;
}
},
process: function() {
var ovlSkip, offset, temp, i;
if (this.pMidBuffer === null) {
// if midBuffer is empty, move the first samples of the input stream
// into it
if (this._inputBuffer.frameCount < this.overlapLength) {
// wait until we've got overlapLength samples
return;
}
this.pMidBuffer = new Float32Array(this.overlapLength * 2);
this._inputBuffer.receiveSamples(this.pMidBuffer, this.overlapLength);
}
var output;
// Process samples as long as there are enough samples in 'inputBuffer'
// to form a processing frame.
while (this._inputBuffer.frameCount >= this.sampleReq) {
// If tempo differs from the normal ('SCALE'), scan for the best overlapping
// position
offset = this.seekBestOverlapPosition();
// Mix the samples in the 'inputBuffer' at position of 'offset' with the
// samples in 'midBuffer' using sliding overlapping
// ... first partially overlap with the end of the previous sequence
// (that's in 'midBuffer')
this._outputBuffer.ensureAdditionalCapacity(this.overlapLength);
// FIXME unit?
//overlap(uint(offset));
this.overlap(Math.floor(offset));
this._outputBuffer.put(this.overlapLength);
// ... then copy sequence samples from 'inputBuffer' to output
temp = (this.seekWindowLength - 2 * this.overlapLength); // & 0xfffffffe;
if (temp > 0) {
this._outputBuffer.putBuffer(this._inputBuffer, offset + this.overlapLength, temp);
}
// Copies the end of the current sequence from 'inputBuffer' to
// 'midBuffer' for being mixed with the beginning of the next
// processing sequence and so on
//assert(offset + seekWindowLength <= (int)inputBuffer.numSamples());
var start = this.inputBuffer.startIndex + 2 * (offset + this.seekWindowLength - this.overlapLength);
this.pMidBuffer.set(this._inputBuffer.vector.subarray(start, start + 2 * this.overlapLength));
// Remove the processed samples from the input buffer. Update
// the difference between integer & nominal skip step to 'skipFract'
// in order to prevent the error from accumulating over time.
this.skipFract += this.nominalSkip; // real skip size
ovlSkip = Math.floor(this.skipFract); // rounded to integer skip
this.skipFract -= ovlSkip; // maintain the fraction part, i.e. real vs. integer skip
this._inputBuffer.receive(ovlSkip);
}
}
});
// https://bugs.webkit.org/show_bug.cgi?id=57295
extend(Stretch.prototype, {
get tempo() {
return this._tempo;
}
});
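// SoundTouch chains the rate transposer and the time stretcher. Pitch shifting
// is realised as resampling plus a compensating tempo change, hence
// tempo = virtualTempo / virtualPitch and rate = virtualRate * virtualPitch below.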
function SoundTouch() {
this.rateTransposer = new RateTransposer(false);
this.tdStretch = new Stretch(false);
this._inputBuffer = new FifoSampleBuffer();
this._intermediateBuffer = new FifoSampleBuffer();
this._outputBuffer = new FifoSampleBuffer();
this._rate = 0;
this._tempo = 0;
this.virtualPitch = 1.0;
this.virtualRate = 1.0;
this.virtualTempo = 1.0;
this._calculateEffectiveRateAndTempo();
}
SoundTouch.prototype = {
clear: function() {
// The original guards referenced undeclared globals and were always no-ops;
// clear the instance's own pipes instead.
this.rateTransposer.clear();
this.tdStretch.clear();
},
get rate() {
return this._rate;
},
set rate(rate) {
this.virtualRate = rate;
this._calculateEffectiveRateAndTempo();
},
set rateChange(rateChange) {
this.rate = 1.0 + 0.01 * rateChange;
},
get tempo() {
return this._tempo;
},
set tempo(tempo) {
this.virtualTempo = tempo;
this._calculateEffectiveRateAndTempo();
},
set tempoChange(tempoChange) {
this.tempo = 1.0 + 0.01 * tempoChange;
},
set pitch(pitch) {
this.virtualPitch = pitch;
this._calculateEffectiveRateAndTempo();
},
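// 0.69314718056 is ln(2), so exp(ln(2) * octaves) === 2^octaves.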
set pitchOctaves(pitchOctaves) {
this.pitch = Math.exp(0.69314718056 * pitchOctaves);
this._calculateEffectiveRateAndTempo();
},
set pitchSemitones(pitchSemitones) {
this.pitchOctaves = pitchSemitones / 12.0;
},
get inputBuffer() {
return this._inputBuffer;
},
get outputBuffer() {
return this._outputBuffer;
},
_calculateEffectiveRateAndTempo: function() {
var previousTempo = this._tempo;
var previousRate = this._rate;
this._tempo = this.virtualTempo / this.virtualPitch;
this._rate = this.virtualRate * this.virtualPitch;
if (testFloatEqual(this._tempo, previousTempo)) {
this.tdStretch.tempo = this._tempo;
}
if (testFloatEqual(this._rate, previousRate)) {
this.rateTransposer.rate = this._rate;
}
if (this._rate > 1.0) {
if (this._outputBuffer != this.rateTransposer.outputBuffer) {
this.tdStretch.inputBuffer = this._inputBuffer;
this.tdStretch.outputBuffer = this._intermediateBuffer;
this.rateTransposer.inputBuffer = this._intermediateBuffer;
this.rateTransposer.outputBuffer = this._outputBuffer;
}
}
else {
if (this._outputBuffer != this.tdStretch.outputBuffer) {
this.rateTransposer.inputBuffer = this._inputBuffer;
this.rateTransposer.outputBuffer = this._intermediateBuffer;
this.tdStretch.inputBuffer = this._intermediateBuffer;
this.tdStretch.outputBuffer = this._outputBuffer;
}
}
},
process: function() {
if (this._rate > 1.0) {
this.tdStretch.process();
this.rateTransposer.process();
}
else {
this.rateTransposer.process();
this.tdStretch.process();
}
}
};
// Exposes a decoded AudioBuffer as a frame source. Assumes a stereo buffer:
// getChannelData(1) throws for mono input.
function WebAudioBufferSource(buffer) {
this.buffer = buffer;
}
WebAudioBufferSource.prototype = {
extract: function(target, numFrames, position) {
var l = this.buffer.getChannelData(0),
r = this.buffer.getChannelData(1);
for (var i = 0; i < numFrames; i++) {
target[i * 2] = l[i + position];
target[i * 2 + 1] = r[i + position];
}
return Math.min(numFrames, l.length - position);
}
};
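// Pulls processed frames out of the filter from inside a ScriptProcessorNode
// callback. (createScriptProcessor is deprecated in favour of AudioWorklet in
// modern browsers, but it is what this library targets.)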
function getWebAudioNode(context, filter) {
var BUFFER_SIZE = 16384;
var node = context.createScriptProcessor(BUFFER_SIZE, 2, 2),
samples = new Float32Array(BUFFER_SIZE * 2);
node.onaudioprocess = function(e) {
var l = e.outputBuffer.getChannelData(0),
r = e.outputBuffer.getChannelData(1);
var framesExtracted = filter.extract(samples, BUFFER_SIZE);
if (framesExtracted === 0) {
node.disconnect(); // Pause.
}
for (var i = 0; i < framesExtracted; i++) {
l[i] = samples[i * 2];
r[i] = samples[i * 2 + 1];
}
};
return node;
}
// Public API of the service (return directly instead of leaking a global).
return {
RateTransposer: RateTransposer,
Stretch: Stretch,
SimpleFilter: SimpleFilter,
SoundTouch: SoundTouch,
WebAudioBufferSource: WebAudioBufferSource,
getWebAudioNode: getWebAudioNode
};
});
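/*
 * Minimal usage sketch (illustrative only; mirrors what MainCtrl.toggleState
 * does above). Assumes `ctx` is an AudioContext and `audioBuffer` a decoded
 * stereo AudioBuffer:
 *
 *   var st = new SoundTouch();
 *   st.pitch = Math.exp(Math.LN2 * 3 / 12); // up 3 semitones
 *   st.tempo = 0.9;                         // 90% of original speed
 *   var source = new WebAudioBufferSource(audioBuffer);
 *   var filter = new SimpleFilter(source, st, 0);
 *   getWebAudioNode(ctx, filter).connect(ctx.destination);
 */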