Everything posted by gyenesvi
-
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
Skip it if you don't want to read it. You have a choice. And let others discuss on a discussion forum ;) I knew there would be a misunderstanding somewhere :) In my system, you'd only have to score the entries you like, 8-10 in this case, and ignore the rest. The ones that don't excite you automatically get a score of 0, without you having to mention them. In the case of the top-6 ranking, while you select the 6 out of the 8-10 you like, you are effectively scoring or comparing them in some way. That's why scoring the ones you like seems easier than ranking the final 6: to arrive at the final 6, you probably already had to consider all the ones you like. And my system simplifies the opposite case as well: if you only have 3 contenders to start with, you only have to score those 3, and don't need to think hard to pick another 3 uninteresting models to reach a total of 6. Does that sound simpler? -
[APP] BrickController2
gyenesvi replied to imurvai's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
It is not that something has been changed in the BC2 app; rather the contrary, it has not yet been changed. Everybody was having such issues with both the BuWizz app and the BC2 app. The problem originated in the BuWizz firmware itself, which previously did not limit the current draw and shut the BuWizz down when a model was pushed too hard (too fast/heavy for the motors). Recently, however, the BuWizz firmware was updated so that current limits can be set, and the BuWizz app now sets them properly, so if you use that app, it no longer shuts down. The BC2 app has not yet been modified, so it does not set the limits, and the shutdown issue is still there. It would be nice if this were fixed. @imurvai, could this be fixed some time? It does not seem like a lot of work (calling a current limit setter at startup), but it would help a lot of people. Alternatively, the BuWizz app is also going to get gamepad controller support soon, so that will also be an option. -
-
-
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
That's an interesting categorization; I've heard others make that differentiation too! Some friends noted, and I agree, that I am an artist-engineer, putting emphasis on both functions and looks, and I was surprised to realize that there aren't that many people like that (as your diagram also implies). Good observation, I agree. Not only because a large group of people don't care about functions, but also because it's much easier to get an impression from the looks than to understand the functions. Sounds interesting, though one tricky point about such a car chassis contest is that it should somehow be possible to demonstrate that a proper body/interior (of a given scale) can be put on each chassis. Otherwise you get entries that pack in a lot of functions by taking up more space but are infeasible to complete into an actual car without blowing up the proportions, and that's hard to see without actually building the body. Maybe something like a reference body could be provided to work with, but that sounds too restrictive. -
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
I'd really like to understand people's thought process on this one, because I just can't wrap my head around how ranking can be easier than scoring. For me, ranking necessarily involves either a (fine-grained) scoring or a sorting (pairwise comparison) of the entries I like, both of which are more complex than simple scoring. So to recap, my proposal is as follows: for each entry you like, give it 1, 2 or 3 points, depending on how much you like it. For the rest, say nothing. How can ranking the first 6 be simpler than that? How do you arrive at the best 6 and their ordering without at least deciding how much you like them? How do you compare them without that? Is it easier for you to do pairwise comparisons than to say how much you like something on a 1 to 3 scale?

Or maybe the problem people have with scoring is that they feel they don't get to explicitly say who they think is 1st, 2nd, 3rd, etc.? Is that what people prefer in the F1 scheme? I understand that it may give people some sort of satisfaction, but I believe it's irrelevant, because in the end it's the crowd that decides anyway, by averaging out the votes.

My thinking is the following: in some contests, where I'm not so much into the theme, there are a few entries I like somewhat, but nothing special; I could give them a +1, maybe a +2, but that's it, and it is hard to pick the remaining ones for the first 6 (the space contest was like this for me). In other contests, such as the shrinking contest or the car transporter contest, there are lots of entries I really like; I could give many +3 and +2 scores, but I would have a hard time deciding which 6 I actually want to rank and which should be 1st, 2nd or 3rd. -
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
The difference I see is that when you give 20, 19, 18, ... points, you essentially only get to decide the ranking; the relative scores are fixed, and linear. If you think of 1st = 20 points as 100%, then 2nd = 19 points is 95% and 3rd = 18 points is 90%, and you cannot change that. But what if you think 3rd deserves only 50% of 1st? You can't express that, and the decay is typically not linear. That's why the F1 scoring exists: it's a non-linear decay, so it's one step better in this respect, but even with it you still don't get to decide the actual relative scores, only the ranking.

Wow, kudos for doing that, that indeed sounds like a lot of work! That's exactly what I'd hope to eliminate the need for. I am not sure such a voting system exists! :)

Exactly, I don't think there is a need for a ranking from each individual. We cannot meaningfully accumulate rankings; we can only accumulate scores. The F1 system is a good way of converting a ranking into scores so that they can be accumulated, when the ranking is already given by the race itself. But if the ranking is not naturally given, why not provide the scores directly instead of going through a ranking just to get scores? -
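The linear-versus-F1 decay point can be illustrated with a minimal sketch (my own illustration, not part of any contest rules; the F1 table used is the modern 25-18-15-... one):

```python
# Compare the implied relative weight of podium places under a linear
# point scheme (20, 19, 18, ...) versus the non-linear F1 scheme.

def relative_weights(points):
    """Express each place's points as a fraction of 1st place's points."""
    top = points[0]
    return [round(p / top, 2) for p in points]

linear = [20 - i for i in range(10)]           # 20, 19, 18, ...
f1 = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]       # modern F1 points table

print(relative_weights(linear[:3]))  # [1.0, 0.95, 0.9] - linear decay
print(relative_weights(f1[:3]))      # [1.0, 0.72, 0.6] - steeper decay
```

Either way, the voter only chooses the ordering; the decay curve itself is fixed by the scheme, which is the limitation described above.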
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
I think that is a fundamental difference between jury voting and public voting. The jury's 'job' is to consider each entry, so for them, ranking all of them is not an extra burden. But in case of public voting, I guess people might not want to consider all entries, only the ones they like, and just say nothing about the rest. -
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
As I noted, I would not put a min/max bound on total points or on the number of entries. The whole idea is to make scoring entries independent of each other. Bounds introduce dependencies, and I think they are not necessary; they just complicate things. Or can you give a case that could become problematic without the bounds?

Exactly, that would be allowed. If you want to pick a single winner, then don't score multiple entries with 3; give them 1 or 2 :) In the end it's the whole community that picks the first-place model through the accumulation of points, not the individuals. But I often have the opposite problem: I can't pick a single winner, I see multiple entries as roughly equally good, and I would like to express that.

Maybe, but I also have the impression that people often have difficulty picking a single first/second/third place and having to decide in such a strict way. If that's a real problem, we could allow half points, so that one could better differentiate between favourites.

That does not differentiate entries properly. Furthermore, if there are 30 entries and I like 10 of them, why should I have to rank the other 20 that I don't like? That is pretty difficult. -
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
Thanks for the explanation. So, do you have a rule of thumb for when it's okay to allow public voting and when to resort to jury voting? I was thinking something like: when entries are more similar in theme/size, there's less chance for bias, but I guess that still does not rule out bias towards certain members/countries. What do you think about the point system I proposed above? -
I did notice this in the Arctic Cat as well, and I think it's really nice; the panels indeed flow much better when put next to each other. I never understood why LEGO made their wing-shaped panels pointy; it always leaves a gap when they are connected. Maybe they were originally designed to be used more in isolation?
-
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
It's nice to hear others' ways of thinking about rules/voting :) Really, I do learn from it. I can accept that when voting is open to the community, everybody votes the way they want, no elaborate criteria are required, and that's intentional. There's only one thing left that I don't quite get: if the same applies to jury voting, then how is it different from public voting (apart from being limited to a few people)? What is its purpose/advantage? I thought it was to better keep the spirit of the contest, but if the criteria are only vaguely defined and can be interpreted differently by each juror, then that can't be the difference. That's not why the discussion started; rather (on my part) it was to better understand how the criteria should be interpreted (now I understand better: quite loosely, all right), and to make the voting/point distribution system easier on the voters.

As for distributing a fixed amount of points among a fixed number of entries, my issue is that those numbers (points, number of entries) will always be somewhat arbitrary and may need to depend on the number of participants. Most importantly, the votes inherently depend on each other, so it is hard to consider each entry on its own (everything needs to be compared to everything). The reason I proposed the scoring system is that the scores have an intuitive meaning: if you can give 1-10 points for something, then 1 means 'I don't like it' and 10 means 'I like it very much'. That's simple to grasp. Now that I get that we don't want fleshed-out criteria and weighting among them, my next thought was to simplify the scoring idea and just allow people to give up to a certain maximum number of points to each entry, which is getting similar to what you guys are proposing here. Maybe even a 0-10 scale is too fine-grained; max 3 points per entry could be enough, so that a single person cannot bias an entry.

It is a kind of generalization of 'likes' (to something like +1, +2, +3). And most importantly, I would not put a min/max bound on the number of entries to vote on or on the total number of points to distribute. What do you think about that? I think it can't get any simpler than that: easy to interpret and execute. Do you see any dangers/disadvantages? What I did think about is whether it's a problem if somebody votes for a lot of entries (as could easily have been the case in the shrinking contest). I don't think so, because then all such entries benefit equally (no biasing happens). -
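The 'generalized likes' tally described above can be sketched in a few lines (entry names and ballots are made up for illustration):

```python
# Each voter gives 1-3 points to only the entries they like; entries a
# voter does not mention implicitly get 0 from that voter, and the final
# ordering simply emerges from the summed totals.
from collections import Counter

def tally(ballots):
    """ballots: list of {entry_name: points} dicts, points in 1..3."""
    totals = Counter()
    for ballot in ballots:
        for entry, points in ballot.items():
            assert 1 <= points <= 3, "only 1-3 points per entry"
            totals[entry] += points
    return totals

ballots = [
    {"A": 3, "B": 2},   # this voter ignores C entirely, so C gets 0
    {"A": 2, "C": 1},
    {"B": 3, "A": 3},
]
print(tally(ballots).most_common())  # [('A', 8), ('B', 5), ('C', 1)]
```

Because no ballot depends on any other entry's score, each entry really is judged on its own, which is the independence property argued for above.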
Hmm, interesting exercise, but what's the purpose of this? Is it just for the building challenge? Other than that, using 4 electric motors to drive a single axle in a convoluted way does not sound very effective. Maybe that's why nobody has done this before? I don't think metal joints and carbon axles are the solution here; sorting out the math and programming comes first.
-
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
That's exactly why I was lost in the beginning: things were vaguely specified. I did not know on what basis to select a model, or what to focus on. Should I go for more shrinkage at the cost of a more basic implementation / dropping of functions, or for less shrinkage and a better representation of functions? Should I focus on functions or on looks (for example colour match, which influences model choice)? Should I go for a function-rich model, or pick what I like more even if it does not have so many functions, and does that put me at a disadvantage? After reading a few questions in the discussion topic I realized that the contest is kind of underspecified in this respect and is not going to get better specified, so I let it go and picked the model I liked, knowing that it probably does not have much of a chance of getting to the podium even if I nail it. I did think about it as a build challenge, as someone proposed the wording, because even if it has voting and prizes, the criteria are vague enough that I can't really use them to guide my choices.

Hmm, that does sound a bit weird to me. What I realize now is that I think differently about community-voting and jury-voting contests; in the latter case I sort of expect more objective voting, which also requires more spelled-out criteria. I can easily accept that with community voting people interpret the rules however they want; in the end it's very subjective and the coolness factor has a big weight. In that case I would not have bothered asking the above questions, knowing that everybody would interpret them in their own way. With jury voting, however, I thought the point of the jury is to pronounce the spirit of the contest by sticking to well-defined voting criteria. That's why it makes sense to ask what exactly those criteria are (and how they are weighted).

As I explained in the previous comment, one benefit could be guidance in the case of community voting: making it easier for people to make decisions by giving them a tool to actually rank entries, by spelling out more of what the spirit of the contest means. Maybe we don't want that, and we just let people interpret the spirit however they want. It could also encourage more informed community votes; if you have to score each entry on each criterion, you're more likely to actually look at the entries and think about them in more detail. At the same time, it gives builders guidance for aiming their builds, especially with jury voting. Don't get me wrong, I am also okay with how it is now. I'll just continue thinking of these as building challenges, in which I might get lucky, please the crowds and end up with a prize.

I guess that's where we differ in our thinking. I don't find that useful, because it leaves little room for differentiation. Rather, I'd start from an average score and increase it if some implementation is excellent, as opposed to just being 'checked'. That way, I'd probably end up giving a near-perfect score to very few entries. -
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
I explained that above. I believe if done right, there would be only a small chance of podium entries getting the same score. For the rest, it does not matter.

Exactly :) That's one key point. You have to realize that even if you don't specify the weighting explicitly, implicitly there is always a weighting: either everything is weighted the same, or everybody does the weighting in their head while ranking (with different weights, which is not ideal). I believe it would be helpful to specify it explicitly, to better understand what to focus on. One simple way of weighting in the above scoring scheme would be to say, for example, that 'functions' can get max 10 points while 'looks' can get max 5 points. That's a 2:1 weighting in favour of functions. Easy to grasp and score.

I see it the other way round. Many people say it is hard for them to rank entries or to choose only the 6 best. With such a scoring, things would become simpler. For one, we'd have to focus on only one entry at a time, as scores would be independent of other entries. Second, with some information on the max score for each criterion and the rough meaning of the score values (as I explained in my previous comment), people would have guidance for assigning scores, so it would be easier than ranking outright. I have also seen people report that when they need to rank entries, they first score them according to their own scoring system, which is understandable; otherwise how can you rank so many entries? It would be better if everybody did so according to a unified scoring system instead of their own. And even if there is a second pass, ranking 3 entries is much easier than ranking 30. Sure, there are always going to be further debates, most of which could be cut off as you wish in order not to get too complicated. For example, I would not go too far with the weighting of criteria, just a few max-score categories.

I don't see this scheme as complicated; it's just like an exam or a school competition. You get points for different subtasks and then sum them up to get a final score. Sure, the teacher/jury needs to weigh the different subtasks in the beginning (a little more preparation required from them), but it does not seem too difficult to me. Anyway, I just wanted to share my idea of how I would do it, because this sounds more intuitive to me. -
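The max-points weighting idea can be sketched as follows (the criterion names and caps are illustrative, e.g. functions capped at 10 and looks at 5 gives the 2:1 weighting mentioned above):

```python
# Per-criterion scoring where the weighting is expressed simply as the
# maximum points each criterion can earn; an entry's final score is the
# sum of its (clamped) per-criterion scores.

MAX_POINTS = {"functions": 10, "looks": 5}  # 2:1 weighting, illustrative

def entry_score(scores):
    """Sum per-criterion scores after clamping each to its cap."""
    return sum(min(scores.get(c, 0), cap) for c, cap in MAX_POINTS.items())

print(entry_score({"functions": 8, "looks": 4}))   # 12
print(entry_score({"functions": 10, "looks": 9}))  # looks clamped to 5 -> 15
```

Just like summing subtask points on an exam: the caps encode the weighting once, and each entry is then scored independently of the others.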
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
Of course that is quite possible, but it is easy to solve: after scoring, take all the entries that are candidates for the podium and run another (community) voting round on just those. For example, if 2 entries share the highest score and 3 share the second highest, then all that needs to be decided is which of the two highest is first and which is second, and which of the three second highest is third. The second round could be a simple ranking.

Also, I think that if many entries came out as 10/10 on many criteria, there'd be something biased about the scoring. For example, our criteria are often related to functions, but just because something works somehow does not mean a 10/10 for that function. It's much more shaded than that; things such as reliability, ease of use and solidity should be taken into account. In fact, with sound scoring I'd expect the scores to follow a normal (Gaussian) distribution: most entries should score around average, a few should be well above average (exceptional), and a few should be well below. Then equal scores would be more probable among average entries, which does not matter for the podium, and the podium places would be distinct with higher probability. That could actually be turned into a scoring guideline: if something works okay but is nothing special, give it around the average score, and only give a high score if it is exceptional. -
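The proposed tie-break can be sketched like this (the totals are hypothetical, and `podium_candidates` is my own illustrative helper): collect every entry whose total ties or beats the score of the last podium rank, and send only those to a second, ranking-only round.

```python
# Find which entries need a second voting round: anyone whose summed
# score is at least the score held by the 3rd-ranked entry is still a
# podium contender.

def podium_candidates(totals, places=3):
    """Entries tying or beating the score of the last podium rank."""
    ranked = sorted(totals.values(), reverse=True)
    cutoff = ranked[min(places, len(ranked)) - 1]
    return {e: s for e, s in totals.items() if s >= cutoff}

# 2 entries share the top score, 3 share the second highest:
totals = {"A": 30, "B": 30, "C": 27, "D": 27, "E": 27, "F": 20}
print(podium_candidates(totals))  # A, B, C, D, E go to round two; F does not
```

Only the five contenders need ranking in round two, which matches the point above that ranking 3-5 entries is far easier than ranking 30.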
Lets "fix" powered up!
gyenesvi replied to allanp's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
The instructions of the sets themselves at lego.com? I know that's not a simple, explicit list to review, but the info you need is there. If there were a place where we could gather this info, I could add my knowledge of the sets that I own. I'm not sure what a good place for this kind of info would be. Maybe @kbalage could help by allocating a page for it on his Powered Up page? -
Generic Contest Discussion
gyenesvi replied to Jim's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
I think that had more to do with general interest in the topic of the contest than with engagement. With this contest, interest seemed much higher, so I'd expect voting participation to have been higher too. But I don't actually mind jury voting or 50/50 voting. I do believe the jury can be better at enforcing the spirit of the contest and avoiding "bigger is cooler" voting. However, I agree with a previous comment that it could be easier, better and more transparent to score each entry individually and then derive the ranking from the individual scores, instead of directly ranking entries against each other. For example, each entry could get a score on a 1 to 10 scale for each criterion, according to how well it satisfies that criterion of the spirit of the contest, and the per-criterion scores would be summed to arrive at a final score for the model. Has this ever been tried? -
Congratulations to the winners! I actually anticipated that both the Arocs of @2GodBDGlory and the Extreme Adventure of @Zerobricks had a high chance of ending up in the top 3! I really enjoyed following and participating in this contest, and was really impressed by both the amount and the quality of the entries. Thanks for the kind words; while I am really satisfied with my model, as I wrote in my discussion thread, starting from the model choice I did not expect it to score high, since it did not have that many complicated functions. But I accepted this and wanted to build this one :)
-
Thanks a lot for testing this, that sounds pretty promising! Did you test it built into something simple, to get closer to how it would behave in real conditions? I am quite confident this would not be a problem. As I wrote in my previous post, simple valves (which are present in hand pumps) could be installed in each circuit, and then one pump would be enough for pressurizing all circuits. If such a pump were built into each model, it would at worst mean giving it a few pumps before a play session. Still much better than pumping throughout play. Did you experience leaking while playing with it? Did you need to re-pressurize it? Looking forward to the video! I wonder how the valve is implemented inside the hand pump, and whether a separate peristaltic pump piece could be printed, including outlets for attaching hoses (if 3D printing is even precise enough for that; it sounds a bit ambitious).
-
Lets "fix" powered up!
gyenesvi replied to allanp's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
The gamepad controller could be theoretically added to the Pybricks solution as well, and it would need a smart device only for configuration. It is not implemented yet though. -
Lets "fix" powered up!
gyenesvi replied to allanp's topic in LEGO Technic, Mindstorms, Model Team and Scale Modeling
Oh, okay. But that's still just one sensor and does not need multiplexing. The booster could accept more than one sensor (up to 4?), and the data from those needs to be differentiated. But if it's a special port on a hypothetical new system, that could probably be solved with some special communication protocol between the booster and the main hub :)