<figure>
<img src="/images/rhythm-ferret.png" alt="The Rhythm Ferret window">
<figcaption>
The Rhythm Ferret window
</figcaption>
</figure>
<p>
The Rhythm Ferret is a dedicated tool to speed up the usually labor-intensive
task of slicing and adjusting a sound region to match a specific time grid. It is
especially useful for drum tracks, either to match a different tempo, or to
adjust a slightly out-of-tempo performance.
</p>
<p>
It is not limited to this use though: as it supports both percussive and note
onset detection, it can be used on melodic material too.
</p>
<h2>Accessing the Rhythm Ferret</h2>
<p>
The Rhythm Ferret window can be accessed by <kbd class="mouse">right</kbd> clicking
any audio region, then <kbd class="menu"><em>Name_Of_The_Region</em> > Edit
> Rhythm Ferret</kbd>.
</p>
<p>
Once the window is open, selecting any region will make it the focus of the
Rhythm Ferret's detection, allowing multiple regions to be processed sequentially
without reopening the window each time.
</p>
<p>
The window itself is made of:
</p>
<ul>
<li>a "mode" selection</li>
<li>some parameters for this mode</li>
<li>an operation selection, which for now only offers <kbd class="menu">Split
regions</kbd>.</li>
</ul>
<h2>The "Mode" selection</h2>
<p>
As the Rhythm Ferret is able to detect both percussive hits and melodic notes,
it is important to choose the mode best suited to the material at hand,
so that Ardour can perform the detection with the greatest accuracy:
</p>
<ul>
<li><dfn>Percussive Onset</dfn> will detect the start of each hit based
on the sudden change in energy (i.e., volume) of the waveform.</li>
<li><dfn>Note Onset</dfn> will detect the start of each note based on the changes
in the frequency domain.</li>
</ul>
<h2>The Percussive Onset mode</h2>
<p>
In this mode, only two parameters are active:
</p>
<table class="dl">
<tr><th><dfn>Sensitivity</dfn> (%)</th><td>The proportion of the samples that must exceed the
energy rise threshold in order for an onset to be detected (at frames in which
the detection function peaks). This roughly corresponds to how "noisy" a percussive
sound must be in order to be detected.</td></tr>
<tr><th><dfn>Cut Pos Threshold</dfn> (dB)</th><td>The rise in energy amongst a group of samples
that is required for that group to be counted toward the detection function's count.
This roughly corresponds to how "loud" a percussive sound must be in order to
be detected.</td></tr>
</table>
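<p>
To give a feel for how these two parameters interact, here is a minimal,
illustrative sketch of an energy-rise detector in Python (with NumPy). The
parameter names mirror the dialog, but the code is a simplified model of the
idea, not Ardour's actual detection code:
</p>
<pre><code>import numpy as np

def percussive_onsets(samples, frame_size=512,
                      sensitivity=50.0,    # %: proportion of samples that must rise
                      threshold_db=3.0):   # dB: energy rise required per sample
    """Illustrative energy-rise onset detector (not Ardour's actual DSP)."""
    ratio = 10.0 ** (threshold_db / 20.0)  # dB rise as a linear amplitude ratio
    usable = len(samples) // frame_size * frame_size
    frames = np.abs(np.asarray(samples)[:usable]).reshape(-1, frame_size)
    onsets = []
    for i in range(1, len(frames)):
        # percentage of samples whose amplitude rose by more than the threshold
        rising = np.mean(frames[i] > frames[i - 1] * ratio + 1e-12) * 100.0
        if rising >= sensitivity:
            onsets.append(i * frame_size)  # onset position, in samples
    return onsets</code></pre>
<p>
In this model, raising <code>threshold_db</code> makes quiet hits invisible to
the detector, while raising <code>sensitivity</code> demands that more of the
frame rises at once, filtering out narrow clicks.
</p>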
<p>
As those parameters are very material-dependent, there is no recipe for a perfect
match: good peak detection is a matter of adjusting those two parameters
by trial and error, clicking the <kbd class="menu">Analyze</kbd> button
after each change.
</p>
<p>
Vertical grey markers will appear on the selected region, showing where Ardour
detects onsets as per the parameters. These markers can be manually adjusted; see
below.
</p>
<h2>The Note Onset mode</h2>
<p>
In the Note Onset mode, more parameters are active:
</p>
<table class="dl">
<tr><th><dfn>Detection function</dfn></th><td>The method used to detect note changes. More on
this below.</td></tr>
<tr><th><dfn>Trigger gap (postproc)</dfn> (ms)</th><td>Sets the minimum inter-onset interval,
in milliseconds, i.e. the shortest allowed interval between two consecutive onsets.
</td></tr>
<tr><th><dfn>Peak threshold</dfn></th><td>Sets the threshold value for the onset peak picking.
Lower threshold values imply more onsets detected. Increasing this threshold
should reduce the number of incorrect detections.</td></tr>
<tr><th><dfn>Silence threshold</dfn> (dB)</th><td>Sets the silence threshold, in dB, under which
the onset will not be detected. A value of -20.0 would eliminate all but the
loudest onsets. A value of -90.0 would select all onsets.</td></tr>
</table>
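<p>
The last three parameters all act on the peak-picking stage that turns the raw
detection function into a list of onsets. The following Python sketch is a
hedged illustration, assuming a precomputed detection function
<code>odf</code> (one value per hop) and matching per-hop loudness values
<code>rms_db</code>; the names and logic are illustrative, not Ardour's code:
</p>
<pre><code>def pick_onsets(odf, rms_db, hop_seconds,
                peak_threshold=0.3,          # minimum detection-function value
                silence_threshold_db=-70.0,  # ignore frames quieter than this
                trigger_gap_ms=50.0):        # minimum inter-onset interval
    """Turn a detection function into onset times in ms (illustrative only)."""
    onsets, last_ms = [], float("-inf")
    for i in range(1, len(odf) - 1):
        is_peak = odf[i] > odf[i - 1] and odf[i] >= odf[i + 1]  # local maximum
        t_ms = i * hop_seconds * 1000.0
        if (is_peak and odf[i] > peak_threshold
                and rms_db[i] > silence_threshold_db
                and t_ms - last_ms >= trigger_gap_ms):
            onsets.append(t_ms)
            last_ms = t_ms
    return onsets</code></pre>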
<p>
The detection function, i.e. the mathematical strategy used in Note Onset mode
to detect note changes, is user-selectable:
</p>
<table class="dl">
<tr><th><dfn>Energy based</dfn></th><td>This function calculates the local energy of the input
spectral frame.</td></tr>
<tr><th><dfn>Spectral Difference</dfn></th><td>Spectral difference onset detection function,
based on Jonathan Foote and Shingo Uchihashi's "The beat spectrum: a new
approach to rhythm analysis" (2001).</td></tr>
<tr><th><dfn>High-Frequency Content</dfn></th><td>This method computes the High Frequency
Content (HFC) of the input spectral frame. The resulting function is efficient
at detecting percussive onsets. Based on Paul Masri's "Computer Modelling
of Sound for Transformation and Synthesis of Musical Signals" (1996).</td></tr>
<tr><th><dfn>Complex Domain</dfn></th><td>This function uses information both in frequency and
in phase to determine changes in the spectral content that might correspond
to musical onsets. It is best suited for complex signals such as polyphonic
recordings.</td></tr>
<tr><th><dfn>Phase Deviation</dfn></th><td>This function uses information both in energy and in
phase to determine musical onsets.</td></tr>
<tr><th><dfn>Kullback-Liebler</dfn></th><td>Kullback-Liebler onset detection function, based on
Stephen Hainsworth and Malcolm Macleod's "Onset detection in music audio
signals" (2003).</td></tr>
<tr><th><dfn>Modified Kullback-Liebler</dfn></th><td>Modified Kullback-Liebler onset detection
function, based on Paul Brossier's "Automatic annotation of musical audio for
interactive systems" (2006).</td></tr>
</table>
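<p>
As a rough illustration, the two simplest of these functions can be written in
a few lines of Python/NumPy per STFT frame; real implementations add
normalization, adaptive whitening and smoothing on top:
</p>
<pre><code>import numpy as np

def energy_based(frame_fft):
    """Energy based: the local energy of one spectral frame."""
    return np.sum(np.abs(frame_fft) ** 2)

def high_frequency_content(frame_fft):
    """HFC: energy weighted by bin index, so broadband, high-frequency
    (typically percussive) frames score highest (after Masri, 1996)."""
    mags = np.abs(frame_fft)
    return np.sum(np.arange(len(mags)) * mags ** 2)</code></pre>
<p>
A sudden jump in either value from one frame to the next is what the
peak-picking stage described above reports as an onset.
</p>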
<p>
Ardour defaults to Complex Domain, which usually gives good results for harmonic
material.
</p>
<h2>Manual adjustment</h2>
<figure>
<img src="/images/rhythm-ferret-demo-a.png" alt="The Rhythm Ferret: analyzing">
<img src="/images/rhythm-ferret-demo-b.png" alt="The Rhythm Ferret: splitting">
<img src="/images/rhythm-ferret-demo-c.png" alt="The Rhythm Ferret: snapping to grid">
<figcaption>
The Rhythm Ferret: analyzing, splitting regions, and snapping to grid
</figcaption>
</figure>
<p>
Using the Rhythm Ferret usually consists of finding the right parameters to
split the audio, by adjusting them and clicking the
<kbd class="menu">Analyze</kbd> button. Each time an analysis is run, Ardour
erases the previous results, and creates grey markers on the region according
to the parameters. Those markers can be manually dragged with the
<kbd class="mouse">left</kbd> mouse button to adjust their positions.
</p>
<p>
Once the markers are suitably placed, the <kbd class="menu">Apply</kbd> button
at the bottom of the Rhythm Ferret window allows the chosen operation to be
performed. At the time of writing, only the <kbd class="menu">Split Region</kbd>
operation is available, which will split the region at the markers.
</p>
<p>
Those regions can then be manually aligned, or have their sync points snapped
to the closest grid point (as per the <a href="@@grid-controls">Grid settings</a>
in effect), by selecting all the regions, <kbd class="mouse">right</kbd> clicking,
then choosing <kbd class="menu">Selected Regions > Position >
Snap position to grid</kbd>.
</p>