This is mostly a simple lexical search+replace, but the absence of
operator< for std::weak_ptr<T> leads to some complications,
particularly with Evoral::Sequence and ExportPortChannel.
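Assuming this is the usual boost → std smart-pointer switch, ordered
containers keyed on weak pointers are the sticking point: boost::weak_ptr
ships an owner-based operator<, std::weak_ptr does not. std::owner_less
is the standard substitute; a minimal sketch (Note is a placeholder type):

```cpp
#include <map>
#include <memory>
#include <set>

struct Note {};

int main ()
{
	// std::weak_ptr<Note> has no operator<, so this does not compile:
	//   std::set<std::weak_ptr<Note>> notes;
	// Ordered containers need an explicit owner-based comparator:
	std::set<std::weak_ptr<Note>, std::owner_less<std::weak_ptr<Note>>> notes;

	std::shared_ptr<Note> n = std::make_shared<Note> ();
	notes.insert (n); // ordered by control block, stable even after n expires

	// Same for maps keyed on weak pointers:
	std::map<std::weak_ptr<Note>, int, std::owner_less<std::weak_ptr<Note>>> velocity;
	velocity[n] = 64;
	return 0;
}
```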
There is already a shaded coverage frame indicating whether a layer is
audible, so this leads to a more consistent view. In addition, changing
the layered mode now has to update the colors (set_frame_color).
In the presence of tempo-changes, distinguishing between offsets and
absolute positions is significant. It is only valid to convert absolute
times using the tempo-map.
Furthermore, since the GUI zoom-factor is time-invariant (samples per
pixel), all GUI operations must explicitly use samples (or timecnt). It
is invalid (and problematic) to use a location-dependent timepos.
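A minimal sketch of why the distinction matters, using a hypothetical
two-segment tempo map rather than Ardour's actual Temporal::TempoMap API:
converting a 2-beat offset through the map as if it were an absolute
position yields a different sample count than subtracting two converted
absolute positions.

```cpp
#include <cassert>

// Hypothetical two-segment tempo map at 48 kHz: 120 bpm up to beat 4,
// 60 bpm afterwards. (Illustrative only; Ardour's TempoMap differs.)
static const double spb_120 = 48000.0 * 60.0 / 120.0; // 24000 samples per beat
static const double spb_60  = 48000.0 * 60.0 / 60.0;  // 48000 samples per beat

static double
samples_at_beat (double beat)
{
	if (beat <= 4.0) {
		return beat * spb_120;
	}
	return 4.0 * spb_120 + (beat - 4.0) * spb_60;
}

int main ()
{
	// Absolute positions convert correctly through the map ...
	double start = samples_at_beat (3.0); //  72000
	double end   = samples_at_beat (5.0); // 144000

	// ... but feeding the 2-beat *offset* through the same conversion
	// silently assumes the tempo at the session start:
	double wrong = samples_at_beat (2.0); // 48000, i.e. 2 beats @ 120 bpm

	assert (end - start == 72000.0); // true distance spans the tempo change
	assert (wrong != end - start);   // converting the offset gives the wrong answer
	return 0;
}
```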
Found via `codespell -q 3 -S *.po,./share/patchfiles,./libs -L ba,buss,busses,doubleclick,hsi,ontop,ro,seh,siz,sord,sur,te,trough,ue`
Follow-up to 364f2f078
::model_changed() is used when the model has changed (e.g. new notes
added or some notes deleted); ::view_changed() is used when only some
view parameter (e.g. zoom, scroll, track height, etc.) has been altered.
Not fully functional yet (::view_changed() ignores scroll).
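A minimal sketch of the split, assuming a note-view class; only the two
method names come from the commit, every type and member here is
hypothetical:

```cpp
#include <utility>
#include <vector>

struct Note     { double beat; };
struct NoteItem { Note note; double x = 0; double h = 0; };

class MidiView
{
public:
	void set_model (std::vector<Note> notes) {
		_notes = std::move (notes);
		model_changed ();
	}
	void set_zoom (double samples_per_pixel) {
		_spp = samples_per_pixel;
		view_changed ();
	}

private:
	// The model itself changed (notes added or deleted): rebuild the items.
	void model_changed () {
		_items.clear ();
		for (Note const& n : _notes) {
			_items.push_back (NoteItem { n });
		}
		view_changed (); // fresh items still need their geometry set
	}

	// Only a view parameter changed (zoom, scroll, track height):
	// keep the items, recompute on-screen geometry only.
	void view_changed () {
		for (NoteItem& it : _items) {
			it.x = it.note.beat * 24000.0 / _spp; // beats -> samples -> pixels (fixed tempo for brevity)
			it.h = _height;
		}
	}

	std::vector<Note>     _notes;
	std::vector<NoteItem> _items;
	double _spp    = 256.0;
	double _height = 12.0;
};

int main ()
{
	MidiView v;
	v.set_model ({ Note { 1.0 }, Note { 2.5 } }); // model edit -> rebuild
	v.set_zoom (128.0);                           // zoom change -> geometry only
	return 0;
}
```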
Copyright-holder and year information is extracted from the git log.
Git history begins in 2005, so (C) information from 1998..2005 is lost,
as are some (C) attributions for commits where the committer didn't use
--author.
Generated by tools/f2s. Some hand-editing will be required in a few
places to fix up comments related to timecode and video in order to
keep them legible.
- wysiwyg (during drag) when dragging more than one note across
a tempo change.
- introduces a musical equivalent of snap_delta (only used for
note drags atm)
- split earliest note in selection into a separate function
- MRV::copy_selection() returns the equivalent _primary note
to avoid offset hell.
- RV::snap_frame_to_frame returns a MusicFrame
- prevent note drag moving before region start.
- for those not in the know, this series provides a way to
remove the temporal distortion introduced when using an
audio frame-based GUI for music-locked objects.
In short, the GUI uses an audio frame representation to move
objects. It displays an object using frame_at_beat(), quantizing
the time value to audio frames. This is fine until the user selects
that frame but expects it to be interpreted as a beat:
beat_at_frame() then does not produce the beat the user expects
(a temporal quantization error of up to 0.5 audio samples).
This series is one method of mapping audio time to music time
accurately (see the sketch after this list).
- use exact beats to determine frame position.
- see comments in tempo.cc for more.
- this hasn't been done for split yet, but dragging and
trimming are supported.
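A self-contained sketch of the quantization error described above and
the exact-beat fix; the tempo, sample rate and the MusicFrame layout
below are illustrative assumptions, not Ardour's actual definitions:

```cpp
#include <cassert>
#include <cmath>

// Illustrative numbers: constant 127 bpm at 48 kHz, chosen so beat
// positions do not fall on integer sample boundaries.
static const double samples_per_beat = 48000.0 * 60.0 / 127.0; // ~22677.17

// The GUI works in integer audio frames, so frame_at_beat() must round:
static long   frame_at_beat (double beat) { return (long) std::llround (beat * samples_per_beat); }
static double beat_at_frame (long frame)  { return frame / samples_per_beat; }

int main ()
{
	double beat  = 3.0;
	long   frame = frame_at_beat (beat); // quantized to the nearest frame

	// Round-tripping through the quantized frame loses the exact beat:
	double roundtrip = beat_at_frame (frame);
	assert (roundtrip != beat); // off by up to half a sample's worth of time

	// The approach sketched in this series: carry the exact musical
	// position alongside the quantized frame instead of re-deriving it.
	// (Hypothetical layout; the real MusicFrame differs.)
	struct MusicFrame { long frame; double beat; };
	MusicFrame mf = { frame, beat };
	assert (mf.beat == 3.0); // no quantization error accumulates across drags
	return 0;
}
```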