Control class: Common response function names #19
Comments
I would add the events from the kivy project:
on_touch_down()
on_touch_move()
on_touch_up()
https://kivy.org/doc/stable/guide/inputs.html. I think these could be closer to our architecture and purposes. |
I like your set of functions. I agree that we just need a few basic functions. I recommend we include a placeholder function for “no touch” or “update()”. I found that a function like that is necessary if a widget is doing some kind of animation even when it’s not being interacted with, like an animation of an icon, or maybe a graph that needs to update its graphics in the background. And if a widget doesn’t need one of these methods, it can just default to the superclass’s empty implementation. |
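For illustration, here is a minimal sketch of that “update()” placeholder idea; the Control base class and the BlinkingIcon widget below are hypothetical examples, not the library’s actual API:

```python
import time

class Control:
    """Hypothetical base class: response functions default to doing nothing."""
    def update(self):
        pass  # widgets that need no background work just inherit this no-op

class BlinkingIcon(Control):
    """Example widget that animates even while it is not being touched."""
    def __init__(self, interval=0.5):
        self._interval = interval
        self._last_toggle = time.monotonic()
        self.visible = True

    def update(self):
        # called on every pass of the main loop, touched or not
        now = time.monotonic()
        if now - self._last_toggle >= self._interval:
            self.visible = not self.visible      # toggle the icon's visibility
            self._last_toggle = now
```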
@kmatch98 are you planning on doing a slider, meaning a non-discrete switch button, to control PWM, voltage output, and other things like that? I am asking because:
|
I agree a slider is a great example that will need all those touch options. Go for it! |
By the way, I re-used your switch_round :) |
I think we shouldn't use names with the on_ prefix. |
Good point. I am not familiar at all with other languages’ callbacks, so the on_ prefix does not ring a bell. Maybe it could be called when_moved, when_touched? |
How about the following: Functions that send the touch position:
Function with no touch position:
|
I like them |
Moving here some discussion regarding this. The context is a widget called Slider, comments from @kmatch98 regarding his first review, and the way to handle its behavior:

First Review

@jposada202020 This is really cool. It really responds to the finger pretty fast! Here are my comments. Sorry if it looks like a lot of things; most are minor tweaks. I think the main question is about how I expected the slider should respond to touch, and that's open for discussion.

graphics: The slider can go outside the background box’s range if slid all the way to the right side.

touch behavior: The slider does not “center” on the touch point. I have some touch-calibration error on my screen, but I think there is some offset in the code.

touch action: For sliders, I think the touch-response should be as follows:
Implementation: If you implement the behavior above, I think you will need a state variable to track whether the slider has received a “touch_down” event. To clear the state variable, I think you will need to create a “touch_up” Control function to let the slider know when it is released.

function names: I recommend changing the …

```python
if p:
    if my_slider.contains(p):
        my_slider.touch_down(p)  # let the slider know it has been touched down
    my_slider.touch_move(p)      # handle any touch movements if the slider is still touched down
else:
    my_slider.touch_up(p)        # let the slider know it has been released
```

Another option is to add a public state variable so you can check the slider's "touch" state. That way you could selectively call touch_move.

touch boundary: I think more padding should be available at the “bottom” of the slider than the top. When you touch the slider, you want to still be able to see the slider above your finger, so your finger’s touchpoint will likely be “below” the slider. I think it will be useful to have an option for non-uniform touch_padding. I suggest including some default bottom_touch_padding, since that would be expected by the user.

simpletest example: The slider is really small on my PyPortal. Maybe make it a bit larger.

value: I think the value should default to be a float between 0 and 1.0. Or the min/max value could be set during the initialization, similar to what I think is done in the progress bar. |
Same topic from @kmatch98 regarding the handling of the touch:

I'm thinking about the touch-response some more, and maybe what I said above needs to be updated. I don't think the simple code that I put above will work right in the case where you have multiple sliders. We need to handle a situation where one slider receives a "touch_down" and then any other sliders (or other widgets, for that matter) should not be triggered with touch events until the slider gets the "touch_up". I suspect @FoamyGuy has some experience with this. Do you think that the event loop should handle which widget should have the "focus"? For example, once we first touch a slider, we should only send "touch_move" commands to that slider until the next "touch_up" (when the finger is released from the screen). Should this be handled by the event loop, or should each widget need to have the right state variables to understand whether it has focus? Here's a suggestion:
This is just a general concept (of course you could develop weird widgets that don't follow this exactly). Your thoughts are welcome. |
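As a rough sketch of the “focus” suggestion above — the loop structure and variable names are assumptions; the contains/touch_down/touch_move/touch_up names are the ones discussed in this thread:

```python
focused_widget = None  # the widget that currently owns the touch, if any

def handle_touch(widgets, point):
    """One pass of a hypothetical event loop; point is (x, y) or None."""
    global focused_widget
    if point is not None and focused_widget is None:
        # first contact: give focus to the first widget that contains the point
        for widget in widgets:
            if widget.contains(point):
                focused_widget = widget
                widget.touch_down(point)
                break
    elif point is not None:
        # while the screen stays pressed, only the focused widget gets movements
        focused_widget.touch_move(point)
    elif focused_widget is not None:
        # finger released (point is None): end the gesture and release focus
        focused_widget.touch_up(point)
        focused_widget = None
```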
Comments from @jposada202020

@kmatch98 Interesting.... I did this in a different way in the past. As we are merging MP (MicroPython), I would like to see if we could use some of the new features introduced. Meaning: as it is now, you would have an event loop, as you said, and the main code will verify, but we add all the widgets to an event loop and all will get updated at the same time, instead of having them async. I agree with you, this discussion is very important, not only for this but for all the widget development; we need to establish some kind of design rules. Our purpose here will be maintainability, ease of support, and most important of all, that for the user it will be as easy as saying:

if widget.touch:
    ...do something

Proposal: let's copy-paste these comments to the graphics team. You know I am down for whatever is easier for the user and makes it easier in the future to add more features. I am a superfan of the text parameter keywords because, if matplotlib taught us something, it is that you always want to add more features, and the enum thing restricts this, or at least makes it more difficult. But I understand why people like their ENUMS :) After all this discussion, could you re-post these comments in the graphics team? I could do it, but I do not want to do this without your approval. |
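A sketch of exposing that public “touch” state so user code can stay as simple as the snippet above; the property and class names here are illustrative only:

```python
class SliderExample:
    """Illustrative widget that remembers whether it is currently held down."""
    def __init__(self):
        self._touched = False

    def touch_down(self, point):
        self._touched = True     # the event loop told us we were pressed

    def touch_up(self, point):
        self._touched = False    # the finger was lifted

    @property
    def touch(self):
        """True while the widget is being held down."""
        return self._touched

# user code then only needs:
#   if widget.touch:
#       ...do something
```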
Comments from @FoamyGuy

Most of the systems I have experience with use some sort of callback-type system for UI-related events. I'm wondering if something like that might be possible with the new features from MicroPython, but I am not sure. In the absence of true callbacks, I do think having the logic in some sort of main event loop is probably the best way to go. I think what @TG-Techie has done with TG-Gui is great. Perhaps we should be working toward making our widgets compatible with this system. I'm not 100% sure how it handles "touch move" type actions, but I'm pretty sure it can, because I think I recall TG-Techie showing a sliding selector widget example on a stream one day. |
Comments from @jposada202020

That is a good idea. I recall seeing it somewhere too, maybe twitter... but yes, you are right, I remember he mentioned something about the event loop. Regarding MP, it depends on what they decide to merge. Anyway, as you said, we know now that what TG does works, so we could build from that. |
Questions and Thoughts

@jposada202020 "In the absence of true callbacks", could you elaborate please? Please excuse that I may have missed some things in reading this thread; what are the new features from MicroPython? I agree.

General Lessons from the TG-Gui
So the CircuitPython implementation of a TG-Gui event loop differentiates between something that is selected as a whole and takes action on behalf of the input, and widgets that have continuous inputs of coords, like sliders and scrollables. From trial and error, this felt closer to the touch experience you'd find on a modern device (I studied an iPad and an iPhone). There are other reasons for separating updating from selecting, but, for example, picture a vertically scrolling list of buttons where touch zones fill all of the list. If the select and update functionalities are the same, the user would never be able to scroll. By separating the two, it differentiates between continuous gestures and directive actions.

TG-Gui Protocols/Interfaces

These are just the method names I chose; they aren't necessarily the right ones for the CircuitPython framework. The naming convention of the methods:
Framework Construction Thoughts
(callbacks not included in this section)

Another difference from the proposed approach seems to be that TG-Gui widgets gain control abilities through structural subtyping, like Rust's traits or Swift's protocols. Again, this is just how I chose to do it; I judged it saved on overhead compared to previous versions of TG-Gui (versions 1-4 had inheritance-based control) and added more flexibility to the framework. Widgets can use any combination of the above-listed protocols just by adding the methods to the class body of the Widget subclass; a rough sketch of this appears after this comment.
|
I hope that helps clarify? |
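A small sketch of the structural-subtyping idea described above: the event loop only calls a response method if the widget actually defines it, so a widget opts in simply by adding the method to its class body. The method names below are placeholders, not TG-Gui's real ones:

```python
class ScrollList:
    """Opts in to continuous coordinate updates by defining the method."""
    def update_coord(self, point):
        print("scrolled to", point)

class PlainButton:
    """Only defines a selection response; it never receives drag updates."""
    def select(self):
        print("button selected")

def send_coord_update(widget, point):
    # duck typing: dispatch only to widgets that provide the protocol method
    handler = getattr(widget, "update_coord", None)
    if handler is not None:
        handler(point)

send_coord_update(ScrollList(), (10, 42))   # prints "scrolled to (10, 42)"
send_coord_update(PlainButton(), (10, 42))  # silently ignored
```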
Thank you for the clear explanation. When I refer to a callback, I mean that in other Python implementations, while defining the widget, you define at the same time what to do when it is interacted with. And you do not need to control that; it is controlled at a higher level by the API. So things are static until interacted with. Not sure how, as I was always the user and not the constructor. But parsing the link you provided, I guess that event loop is the brains of the event operation. Thanks again. |
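For comparison, here is a tiny sketch of the callback style being described, where the reaction is attached when the widget is defined and the framework's loop invokes it later; all names here are hypothetical:

```python
class CallbackButton:
    """Illustrative widget that stores a user-supplied reaction."""
    def __init__(self, on_press=None):
        self._on_press = on_press

    def touch_down(self, point):
        # the framework's event loop calls this, not the user
        if self._on_press is not None:
            self._on_press()

# the user declares the behavior up front and never touches the event loop
button = CallbackButton(on_press=lambda: print("pressed!"))
```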
You bring up some good points, thank you. Yes, what I linked to is the internal brains of an event loop. By linking to it I'm (implicitly?) agreeing with KMatch's note on how having a pre-defined interface for response functions "will likely make it easier to write a Main loop event handler". A distinction I failed to point out is that what I described above is not "user-facing". A person wiring up an interface shouldn't need to understand or manipulate how an event loop ties into the implementations of control; I would think that should be the job of Widget and library maintainers. EX: Widgets' …

@kmatch98, were you proposing that all user-facing code share a common term for "on press" and the like in the constructor?
😂, no plan survives first contact with (* insert other method name) |
@jposada202020 Thanks for moving the discussion over here! I’m proposing that we come up with a few common responses that widgets will have, and stick with those names. If we are inconsistent, then it will take a lot of time for a user to read through all the docs to discover whether a button uses “press” and a slider uses “touched”. That will make it chaos to write an event loop.

Also, I would like there to be re-usable event loops so folks don’t always have to write a new event loop. For example, I want to have an event loop that handles my button presses and my slider inputs. If each widget has similar “Control” functions, we can send the same commands to all the widgets without having to know whether the widget will actually do anything with that command. The widget can choose to ignore an event loop’s command if it doesn’t make sense to it.

A concrete example: For a slider, once it is touched, any touch movement should change the slider value (even if the touch is moved outside the slider touch_boundary, like the slider in YouTube videos). Once the slider is touched, no other widgets should respond to any touch movements. When the touch is removed, the slider stays at its last value. Here’s what I propose:
Note: The event handler must keep track of the difference between the first touch (touch_down) and a touch movement. (@TG-Techie, I think this is how your TG-Gui event loop is constructed.) The event handler can check if the touch_down is within the touch_boundary of a widget (“contains”), and if so, it calls the touch_down Control function for that widget. (Alternatively, the event handler could send “touch_down” to all widgets and they can check whether they should respond to that touch point location. Either way is fine with me, and this second way is more consistent with the other two I propose, but contains gives the event loop more options; see *** below.)

Now, the event handler spews “touch_moved” points to all widgets. Note: Widgets have to keep track of their own state variables to know if they were “touched_down”, and whether they should respond to touch_move events.

When the touch is released, the event handler spews “touch_up” commands to all widgets.

Some widgets may need to do updates when nothing is being touched (like animations). I think an additional “update” function should be called in the event loop so the widget has a chance to update itself periodically. Widgets are responsible for not “hogging” the processor cycles.

So I propose a widget should define and respond to (or choose to ignore) these five Control class functions:
*** Some things I haven’t specifically considered are “layers” or “views”, in case some widgets should get priority over others for touch-event “focus”. If we use the “contains” method, the event loop could choose to only send the “touch_down” command to the first (“top”) widget in its list whose “contains” returns True. |
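Here is a sketch of the proposal above as a reusable event loop. It assumes the five functions are contains, touch_down, touch_moved, touch_up, and update (all mentioned in the comment), and that get_touch_point wraps whatever the board provides for reading the screen, returning (x, y) or None; the loop itself is an illustration, not a decided design:

```python
def run_event_loop(widgets, get_touch_point):
    """Hypothetical reusable event loop for widgets with common Control functions.

    widgets         -- objects providing contains/touch_down/touch_moved/touch_up/update
    get_touch_point -- callable returning the current touch as (x, y), or None
    """
    last_point = None
    while True:
        point = get_touch_point()
        if point is not None and last_point is None:
            # first contact: only widgets whose touch_boundary contains it respond
            for widget in widgets:
                if widget.contains(point):
                    widget.touch_down(point)
        elif point is not None:
            # spew movements to everyone; widgets not touched down just ignore them
            for widget in widgets:
                widget.touch_moved(point)
        elif last_point is not None:
            # finger released: spew touch_up, reported at the last known position
            for widget in widgets:
                widget.touch_up(last_point)
        # every widget gets a chance to animate or update itself, touched or not
        for widget in widgets:
            widget.update()
        last_point = point
```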
@TG-Techie One thing to consider regarding the design: we are in the middle of two worlds... And finally, yes, the user will always find a way to do the only thing you did not think of :) |
@kmatch98, I'm not quite sure what you mean by handler; I would guess the widget being interacted with is the "handler"?
Are you suggesting
I think that's a great approach, especially for the embedded space!
👍, I very much agree. Have you heard of the design principle called progressive disclosure?
TG-Gui (on CircuitPython) doesn't currently have the widget being interacted with check whether the coordinates are contained within it, but it really should 😅. However, TG-Gui is structured so the underlying implementation can change but the user-facing API doesn't. |
+1, I very much agree. Have you heard of the design principle called progressive disclosure? Not a clue, but I think with this post I will progressively learn 💻 about it. 😄 |
To clarify, I mean that the main while loop has to detect and then send commands always in the touch_down -> touch_moved -> touch_up sequence.
My proposed
If this means letting users know the basics they absolutely need up front, but hiding complexity until they want to do something totally specialized, then I’m all for it. |
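A minimal sketch of that detection step, assuming the main loop polls the touchscreen and must always emit events in the touch_down -> touch_moved -> touch_up order; the helper name is purely illustrative:

```python
def classify_touch(point, was_touched):
    """Turn one touchscreen poll into at most one event name.

    point       -- (x, y) if the screen is pressed, otherwise None
    was_touched -- whether the previous poll saw a touch
    Returns (event_name_or_None, is_touched_now).
    """
    if point is not None and not was_touched:
        return "touch_down", True       # a new press always comes first
    if point is not None and was_touched:
        return "touch_moved", True      # a continued press becomes a movement
    if point is None and was_touched:
        return "touch_up", False        # release ends the sequence
    return None, False                  # no touch and nothing pending
```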
With the advent of new widgets such as icon_widget, a clear need has arisen for common function naming schemes for the Control class. The objective is that the main event handler can use a common approach for calling multiple widgets to respond to various input events.

Of course, widgets aren't required to conform to any pre-defined response functions, but it will likely make it easier to write a main loop event handler if widgets have the same function naming and a common expected effect.
Here's one example. Looking at the PyGUI, it defines responses to four mouse events:
This issue is raised to gather suggestions for the basic response functions to use in the Control class.

Let's discuss a few basic functions that make sense in a touch environment on small microcontroller screens. The objective is not to collect 100% of cases, but to capture the few things that would be most used, and then grow from there.
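As a starting point for discussion, here is a sketch of what a widget could look like if it followed one such set of common response names; both the Control base class and the method names below are placeholders for whatever this issue settles on, not the library's current API:

```python
class Control:
    """Placeholder base class: common response functions default to no-ops."""
    def contains(self, point):
        return False
    def touch_down(self, point):
        pass
    def touch_moved(self, point):
        pass
    def touch_up(self, point):
        pass
    def update(self):
        pass

class IconWidget(Control):
    """Example widget that overrides only the responses it cares about."""
    def __init__(self, x, y, width, height):
        self._bounds = (x, y, x + width, y + height)

    def contains(self, point):
        # point is an (x, y) tuple in display coordinates
        x0, y0, x1, y1 = self._bounds
        return x0 <= point[0] <= x1 and y0 <= point[1] <= y1

    def touch_down(self, point):
        print("icon pressed at", point)
```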