Discussion:
[yoshimi-user] Midi learn
louis cherel
2015-07-09 08:50:47 UTC
Hi everyone,

I don't know if you remember me but I have been following this mailing list
for a while now.

In 2011, Alessandro Preziosi (licnep) started working on the midi learn
functionality, to ease the use of midi controls for every knob on the
interface (right click > midi learn), and to permit live modification of
a sound while it is playing, like the LFO frequency.
Unfortunately, his code was not stable enough to be merged into the main
branch.

I am currently working on a new version of this feature, which I think has
been (at least for my own use) a real gap in yoshimi/zyn.
For now, I have integrated Alessandro's work into the current version of
yoshimi and I am reworking it to be less of a hack and more of a feature.

I think I will be able to release a working version in late August.

Since Yoshimi is still in active development, I thought it would be worth
letting you know about this. I may also have missed some information saying
that you are already working on something similar, and I wouldn't want to
interfere with your work.

Have a good day,
Louis Cherel
Rob Couto
2015-07-11 05:38:08 UTC
Hello Louis and everyone,
Post by louis cherel
Since Yoshimi is still in active development, I thought it would be worth
letting you know about this. I may also have missed some information saying
that you are already working on something similar, and I wouldn't want to
interfere with your work.
I'm glad you asked, even though now a grace period seems to have ended
for me and I have to follow suit :) You didn't miss anything that I
know of, but mainly because until now I haven't announced that I'm
currently working on separating the GUI and SynthEngine to make them
completely agnostic of each other's code. That's just in preparation
for the real fun part-- I intend to work out a new UI or two, to
arrive in a new compile-time option. But since I still need to hook
the UI back to the synth through some kind of wrapper, I realized that
MIDI learning would best be done in the same place at the same time,
so that any UI implementation will work with it and without a lot of
duplicated logic. It also needs controllers that are saved in state or
in parameter files to keep working when they are loaded on the command
line along with --no-gui or when Yoshimi is compiled with
UserInterface=none. ("none" is strictly headless mode, for people who
want or need a build that depends on no toolkit at all, such as an
embedded installation or an external sound engine e.g. for a game.)

At the same time, I believe the thread synchronization can be improved
to banish some more xruns, because the UI and synth won't share and
modify each other's significant data structures-- they will only exchange
messages about how to update themselves. Andrew's work with asynchronous GUI
messaging seems a good pattern to follow. Even though its operation
currently depends on FLTK, it's not terribly difficult to make it work
both ways and have variations of it that work for all threads-- just a
lot (some hundreds) of small modifications. Ultimately I mean to make
the JACK callback genuinely realtime-safe by replacing the locks.
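
To give a rough idea of the kind of pattern I mean (this is only a sketch
with made-up names, not code from Yoshimi or from Andrew's work), the GUI
thread could push small fixed-size messages into a lock-free
single-producer/single-consumer queue, and the audio callback would drain
it without ever taking a lock:

// Sketch only: a minimal lock-free SPSC queue for GUI -> synth messages.
// Every name here is hypothetical, not an actual Yoshimi class.
#include <atomic>
#include <cstddef>

struct ControlMsg {
    int   paramId;   // which control to change
    float value;     // new value, already scaled for the synth
};

class ControlQueue {
public:
    // Called from the GUI thread only. Returns false if the queue is full.
    bool push(const ControlMsg& msg) {
        size_t head = writePos.load(std::memory_order_relaxed);
        size_t next = (head + 1) % SIZE;
        if (next == readPos.load(std::memory_order_acquire))
            return false;                        // full: drop or retry later
        buffer[head] = msg;
        writePos.store(next, std::memory_order_release);
        return true;
    }

    // Called from the audio callback only. No locks, no allocation.
    bool pop(ControlMsg& out) {
        size_t tail = readPos.load(std::memory_order_relaxed);
        if (tail == writePos.load(std::memory_order_acquire))
            return false;                        // empty
        out = buffer[tail];
        readPos.store((tail + 1) % SIZE, std::memory_order_release);
        return true;
    }

private:
    static const size_t SIZE = 1024;
    ControlMsg buffer[SIZE];
    std::atomic<size_t> writePos{0};
    std::atomic<size_t> readPos{0};
};

The callback would pop() everything pending at the start of each process
cycle and apply it; the reverse direction (synth -> UI notifications) would
be a second queue of the same shape, so neither side ever blocks the other.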

Now, the reason that is slightly off-topic even though it's the same
*kind* of topic: Is someone else working near these things and would
like *me* to not step on *their* toes? I wanted to get the UI
separation finished and push it somewhere, before I tried to be the
one that would start a conversation, but it couldn't wait any longer.
I've recently pestered Will about this and if I understand correctly,
Andrew already has something similar in mind if not in progress. I
started planning on all that because I have time and I feel like
cooking a whole new UI and this is a good point to pay off or even
cancel some old technical debts. Of course making another UI is
creating more technical debt-- essentially all future user interface
changes would require more work, even if each UI is easier to
work on. Thoughts? Comments? If you want to wait (as I intended to)
until I have anything to show, that's quite all right.

Completely back on topic: Thanks, Louis, for checking in. Here is my
position: my position may be irrelevant :) I've been just a casual
tester & minor guerilla contributor with random bits of insight on
occasion. I haven't looked deeply into Alessandro's code recently, but I
actually planned to recycle at least the Midi Controllers window in
order to get the MIDI learn part of the UI wrapper ready to go with
the FLTK GUI first, and using that to make sure the synth-side
controllers were working well. Only then would I start working out
another entire UI, while anyone who wanted to could test-drive the
intermediate result. For what it's worth, I plan to borrow Calf's GUI
engine because IMO it is Pretty Sweet(TM), copy the current general
layout of Yoshimi, and set up a default theme. Afterward I want to
come up with another, something based on ncurses-- which is why I keep
saying UI and not always GUI.

I believe that whether this matters to you largely depends on whether
my plans matter to anyone else, so that's how the topics meet-- I
wasn't really trying to hijack your thread! Anyway I'm fairly sure
that most of what I'm doing is basically incompatible with most of
Alessandro's work. I looked at it before and adapted it to 1.0.0, but I
was one who said long ago that it would need a lot of work to be
finished, and today I'm preparing to wrap up that effort within a
limited redesign. So, I hope to have an alternate MIDI-learning UI to
play with by the end of this year, and any reusable parts of his
MIDI-learning safely adapted in the original UI before then. Still,
nobody is required to be impressed by this mere summary, and the
Yoshimi team isn't required to pull what I push, and my plans might
not affect you at all. Have to let the seasoned robot fighters speak
up :)

Good luck, have fun...
--
Rob
Thomas Mitterfellner
2015-07-11 09:21:54 UTC
Hello there!
Post by Rob Couto
nobody is required to be impressed by this mere summary,
Even though I'm not required to, I must say: I'm impressed! If you
really succeed with what you sketched here (not that I doubt you or your
skills, but it sounds like an _awful_ lot of work!) this will make
yoshimi a modern and flexible killer application!
Post by Rob Couto
For what it's worth, I plan to borrow Calf's GUI
engine because IMO it is Pretty Sweet(TM), copy the current general
layout of Yoshimi, and set up a default theme.
I hadn't thought about Calf and I must admit that I only know it from
Ardour *plugins*, but now that you mention it, I remember that it a)
looks great, b) has good usability, c) is scalable, and d) has the
widgets required for yoshimi. But I think it's Linux-only, right? Have
you thought about using a versatile cross-platform toolkit like Qt and
Qwt (http://qwt.sourceforge.net/index.html)?

Anyway, kudos to you and good luck with this great endeavor!

Greetings,

Thomas
Will Godfrey
2015-07-11 16:45:54 UTC
Hi all,

I'm absolutely delighted by all the support Yoshimi is getting these days.
It's probably just as well I don't have a webcam, so you're spared the sight of
my pathetic attempt at break dancing :)

While out today getting some fresh air I had a good long think.

First of all, for all the known internal shortcomings, Yoshimi is in fact
remarkably stable (although I've just been informed of a slightly obscure crash
situation which I'll be looking into myself).

The things I know need sorting are:

1 Midi Learn
2 GUI Separation
3 Command Line Interface
4 Common Interface / API
5 ALSA Audio

The top four are quite closely interlinked, and number 4 itself is pretty fundamental.
Number 5 is pretty much a side issue, as most people use JACK for audio.

Something I've mentioned privately to a few people, but now think should be
aired publicly is my reason for wanting command line access. It is because I
am very keen to make Yoshimi accessible to blind musicians. I've known one or
two in the past and they struggle mightily with most music software currently
available.


The proposals we have on the table all seem to overlap to varying degrees.
However:

Andrew's work is mostly focussed on LV2.

Rob's ideas are a root & branch operation - rather exciting but very invasive.

Louis' Midi Learn upgrade looks like it could be implemented relatively
quickly.

The problem is how to make the best use of people's time without wasted,
duplicated effort. So here are some ideas to knock about.


Midi Learn is becoming almost an embarrassment: we've had people knocking on the
door for over four years, and most other systems have it implemented - although
not as nicely as Alessandro's original (from a user's point of view) - so I'd
like to ask Louis to carry on with that against the current master. As well as
filling an important hole, what we learn from that can also be applied to a
later 'gold' version.

Meanwhile, if Rob tackles the deeper structural work on a different branch we
can do a switcheroo later, while always maintaining a stable version for our
users. I don't know if this can be done in modular phases. It would be nice if
it could. That would reduce the divergence while work was going on.

With this low-level work I would be inclined to suggest some discussion between
Rob and Andrew.

At the same time I can continue to tackle odd-ball issues. Things like vector
control are just about fully baked now, but there's that ALSA problem and
various GUI/usability stuff - it seems to be what I'm best at :o

How does that all sound?
1 Practical
2 Possible
3 Difficult
4 Not Likely
5 You're 'avin a larf mate!

P.S.

Talking about vector control, Harry has been looking at the idea of a GUI
section for setting it up as an alternative to NRPNs. I don't know how far he's
been able to get with this.


P.P.S.

I now think putting all the NRPN stuff in MusicIO was a mistake and am
considering moving it to SynthEngine. That would take out a lot of
indirection/pointers and put it in the same logical area as all the other
control stuff.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Rob Couto
2015-07-13 15:54:09 UTC
Hi again,
Post by Will Godfrey
Midi Learn is becoming almost an embarrassment: we've had people knocking on the
door for over four years, and most other systems have it implemented - although
not as nicely as Alessandro's original (from a user's point of view) - so I'd
like to ask Louis to carry on with that against the current master. As well as
filling an important hole, what we learn from that can also be applied to a
later 'gold' version.
Meanwhile, if Rob tackles the deeper structural work on a different branch we
can do a switcheroo later, while always maintaining a stable version for our
users. I don't know if this can be done in modular phases. It would be nice if
it could. That would reduce the divergence while work was going on.
With this low-level work I would be inclined to suggest some discussion between
Rob and Andrew.
At the same time I can continue to tackle odd-ball issues. Things like vector
control are just about fully baked now, but there's that ALSA problem and
various GUI/usability stuff - it seems to be what I'm best at :o
How does that all sound?
That sounds mostly agreeable, and I think after this we should
continue over on yoshimi-devel. The thing that worries me: invasive
changes in the background followed by a big merge is what I figured
could happen someday, but then it means Louis would have volunteered
for some work which would only get real use for a short time-- unless of
course during that time I get hit by some harsh difficulty (or a bus).
The other thing that worries me is Alessandro's patch: I still expect
to borrow his UI, but having looked again, I'm afraid I must say that
its underpinnings don't seem simple enough to base things on. Therefore
it isn't ideal for Louis and me to work separately in the same area,
and this topic is like a real-life imitation of a mutex.
I had to speak up, precisely because I wouldn't want
anyone to waste any more time on The Knot unless they were already
fixing it. I'll keep working on a branch in any case-- I have enough
reasons, and extra.

However: the modifications to LFO, Envelope, Filter, etc. to make them
smoothly track GUI control changes in realtime could be done now, more
or less orthogonally to any deep modifications. Last week I pestered
Will about that, too :)
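
For what that could look like (again just a generic sketch, with invented
names rather than anything from Yoshimi's LFO/Filter classes), each affected
parameter could chase its target through a simple one-pole smoother, so a
sudden knob or CC change never lands as a step in the audio:

// Sketch: one-pole parameter smoothing to avoid zipper noise.
// Hypothetical helper, not an existing Yoshimi class.
#include <cmath>

class SmoothedParam {
public:
    void setSampleRate(float sr) {
        // roughly a 10 ms glide; the time constant is just a tuning choice
        coeff = 1.0f - std::exp(-1.0f / (0.010f * sr));
    }
    void setTarget(float t) { target = t; }   // call when a knob or CC moves
    float next() {                            // call once per sample
        current += coeff * (target - current);
        return current;
    }
private:
    float coeff   = 0.01f;
    float target  = 0.0f;
    float current = 0.0f;
};

The synth-side code would read next() instead of the raw stored value, which
is cheap enough to leave enabled everywhere.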

About phases-- definitely yes. I had something more specific to say
but the phases have changed a bit while my so-called strategy is
becoming one that can be done sooner and involves less overhead.
Whatever I think I'm doing needs to be simpler than what I started to
describe :P

'til later...
--
Rob
Rob Couto
2015-07-13 19:30:04 UTC
And, hello again... I didn't forget, I was just out of time
Post by Thomas Mitterfellner
Even though I'm not required to, I must say: I'm impressed! If you
really succeed with what you sketched here (not that I doubt you or your
skills, but it sounds like an _awful_ lot of work!) this will make
yoshimi a modern and flexible killer application!
I certainly hope so :)
Post by Thomas Mitterfellner
But I think it's Linux-only, right? Have
you thought about using a versatile cross-platform toolkit like Qt and
Qwt (http://qwt.sourceforge.net/index.html)?
The last time I checked, Yoshimi itself was Linux-only. Yoshimi
people, is there any intention of changing that? I'm not sure but I
think at least pthreads would have to be swapped out for something
portable for Windows (though not OS X, which has pthreads), and if there's
no ALSA elsewhere, that means using either JACK or more glue. If it happens, Calf's GUI
is built on top of GTK and that's fairly cross-platform, mostly
versatile. As for Qt, yes I thought it would make a rather good one
but I set that aside, at least for now.
Post by Thomas Mitterfellner
Anyway, kudos to you and good luck with this great endeavor!
Thanks!
--
Rob
Kristian Amlie
2015-07-13 20:04:57 UTC
Post by Rob Couto
The last time I checked, Yoshimi itself was Linux-only. Yoshimi
people, is there any intention of changing that? I'm not sure but I
think at least pthreads would have to be swapped out for something
portable for Windows
FYI, there is a pthreads replacement for Windows:

https://www.sourceware.org/pthreads-win32/
--
Kristian
Will Godfrey
2015-07-13 22:24:13 UTC
On Mon, 13 Jul 2015 15:30:04 -0400
Post by Rob Couto
The last time I checked, Yoshimi itself was Linux-only. Yoshimi
people, is there any intention of changing that?
Personally I have no knowledge of programming anything except Linux. However, I
can see potential benefits of making Yoshimi cross-platform, and would have no
objection at all if someone wanted to get involved in this.

What I don't want to happen though is for actual development to get dragged
down in the mire of platform incompatibilities. One of the first things Cal did
was to strip out the existing windows code in order to get a clear view.
Apparently, it eliminated a lot of obscure bugs.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Tom
2015-07-14 07:32:37 UTC
Post by Will Godfrey
On Mon, 13 Jul 2015 15:30:04 -0400
Post by Rob Couto
The last time I checked, Yoshimi itself was Linux-only. Yoshimi
people, is there any intention of changing that?
Personally I have no knowledge of programming anything except Linux. However, I
can see potential benefits of making Yoshimi cross-platform, and would have no
objection at all if someone wanted to get involved in this.
What I don't want to happen though is for actual development to get dragged
down in the mire of platform incompatibilities. One of the first things Cal did
was to strip out the existing windows code in order to get a clear view.
Apparently, it eliminated a lot of obscure bugs.
I used to run Linux programs on Windows using Cygwin, which used to be
pretty good. Might be worth a try for those stuck on Windows.
Tom
louis cherel
2015-07-29 16:38:41 UTC
Hi again everyone,

I have been quiet these last few days, but I have worked on a *proof of
concept* of my nearly entirely rewritten midi learn functionality.

So there it is: https://github.com/Musinux/yoshimi/tree/midilearn
After compiling it:
User Manual:
1) shift + right-click on an LFO knob (freq, depth, start...) in any part of
the software
2) click on "Midi Learn"
3) move the knob of your hardware controller, or send the midi signal you
want to control the knob with
4) have fun

Currently, you can:
* Change any LFO param in real time. So yes, now when you change a
knob, even if it's with the mouse and you are playing a note, the note is
affected directly.
  --> the best thing to try is AddSynth: move the frequency LFO freq and
  depth; the sounds are really smooth
* Midi-control any LFO param

It does modify the knob as you move your controller.

Currently you CANNOT:
* change any other parameter
* SAVE your midi learning configuration

The design has been done so that the midi controlling feature and the UI
are totally separated, but in such a way that there is no latency between
the controller signal and the UI update.
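
Very roughly, the idea looks like this (the names below are invented just
for this email, they are not the real classes in my branch): the midi side
only keeps a table from (channel, CC) to a parameter id, and it talks to the
synth and the GUI through callbacks instead of touching any widget directly:

// Sketch only: a minimal midi-learn table; all names are invented for this example.
#include <functional>
#include <map>
#include <utility>

class MidiLearner {
public:
    // The GUI calls this when the user picks "Midi Learn" on a knob.
    void startLearning(int paramId) { learning = true; pendingParam = paramId; }

    // The MIDI input thread calls this for every incoming CC.
    // setParam applies the value to the synth; notifyUi asks the GUI to move the knob.
    void handleCC(int channel, int cc, int value,
                  const std::function<void(int, float)>& setParam,
                  const std::function<void(int, float)>& notifyUi) {
        if (learning) {                          // bind this CC to the waiting knob
            bindings[{channel, cc}] = pendingParam;
            learning = false;
        }
        auto it = bindings.find({channel, cc});
        if (it == bindings.end())
            return;                              // CC not learned yet: ignore
        float v = value / 127.0f;                // 0..127 -> 0..1
        setParam(it->second, v);                 // immediate, on the synth side
        notifyUi(it->second, v);                 // asynchronous, on the GUI side
    }

private:
    std::map<std::pair<int, int>, int> bindings; // (channel, cc) -> paramId
    bool learning = false;
    int  pendingParam = -1;
};

Saving and loading would then mostly be a matter of serialising the bindings
table, which is part of what is still to come below.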

Features to come:
* Saving the state of midi learn
* Applying the system to any knob of the interface
* Midi feedback for motorized controllers (BCF2000...)

I would be really pleased to get any bug reports (for the few functionalities
there are so far) or any GUI improvements you can think of (only for the
midi learn functionality, of course).

Have a nice day,

Louis
Will J Godfrey
2015-08-02 17:29:20 UTC
On Wed, 29 Jul 2015 16:38:41 +0000
Post by louis cherel
Hi again everyone,
I have been quiet these last few days, but I have worked on a *proof of
concept* of my nearly entirely rewritten midi learn functionality.
So there it is: https://github.com/Musinux/yoshimi/tree/midilearn
I've only just got back from my holiday. I'll have a look at this as soon as I
can find time.
--
It wasn't me! (Well actually, it probably was)

... the hard part is not dodging what life throws at you,
but trying to catch the good bits.

oli_kester
2015-08-04 14:27:24 UTC
Post by louis cherel
Hi again everyone,
I have been quiet these last few days, but I have worked on a proof of concept of my nearly entirely rewritten midi learn functionality.
So there it is: https://github.com/Musinux/yoshimi/tree/midilearn
1) shift + right click on LFO knob (freq, depth, start...) of any part of the software
2) click on "Midi Learn"
3) move the knob of your hardware controller, or send the midi signal you want to control the knob with
4) have fun
Hi Louis

Thanks for taking this on, it's gonna be a serious step up for Yoshimi as a performance tool! :)

I can't get it to work for me though - I cloned your "M" version of 1.3.5 and have it open, but shift+right-clicking on knobs inside AddSynth does not appear to do anything different to the standard version. It just pops up the tooltip.

Did I have to specify something at compilation / runtime?

Best of luck,

Oliver



oli_kester
2015-08-04 15:21:53 UTC
Hi Louis

It works! Thanks for the quick response, and a lesson in GitHub :)

Straight away I got some nice smooth resonance sounds with the LFO that I could never have got before. Felt compelled to do a quick bounce and upload -

https://soundcloud.com/juventaechasma/ghost-strings-midilearn/s-j0WVx

I feel like this will really unleash the synthesis inside Yoshi with minimal mouse pointer fiddling :)

Best,

Oliver
Post by louis cherel
Just another thing: there are still some bugs, so you should consider saving the files produced with this version separately, to avoid any loss of your previous work :-)
Have a good day,
Louis
Hi Oli,
Thanks, I really hope that it will be useful for you!
The version you should have is 1.3.5 Midilearn, M standing for Master.
You have to download the midilearn branch to get the new functionalities.
Normally, there is no compilation option to specify.
If you have any other problems installing it, don't hesitate to contact me again.
Cheers,
Louis Cherel
louis cherel
2015-08-04 15:51:57 UTC
Great news!

I really felt the same. Yoshimi has been lacking this functionality for
sooo long!
Try the attached instrument: playing with the parameters of the freq LFO,
filter LFO and amp LFO makes it sound really awesome (playing with the
feedback of the two effects is impressive too).

Enjoy !

Louis
Post by oli_kester
Hi Louis
It works! Thanks for the quick response, and a lesson in GitHub :)
Straight away I got some nice smooth resonance sounds with the LFO that I
could never have got before. Felt compelled to do a quick bounce and upload
-
https://soundcloud.com/juventaechasma/ghost-strings-midilearn/s-j0WVx
I feel like this will really unleash the synthesis inside Yoshi with
minimal mouse pointer fiddling :)
Best,
Oliver