Robotics Project: Snuggles, The Wolf

There are a lot of separate things and a lot of words in my responses :oops: so I split them up into more than one post below:


There's really no need to use modeling for the position of limbs. I guess I left out that neural networks are incredibly slow when compared to old-school methods, like formulas and equations. Even if the hardware were able to keep up, the video card in my desktop uses up to 350 watts of electricity to do what it does -- which is more than half the average power draw of my ebike (500 W).
Ok. I don't yet know how to do this stuff, or which things are good for what. So whatever alternatives you (or anyone else) can think of for processing this data, I'm up for trying out. :)

Granted, you could have a pretty big battery in a St. Bernard-sized body, but I don't think it would last much longer than a short nap.

Well, the computer(s) won't be in the robot itself (they'd probably catch fire inside the heat-insulating furry body!***), and since this version of the project is essentially for a relatively stationary robot (imagine a big old tired lazy dog that just lays on the bed and waits for you to get home, then snuggles, plays, etc., all just right there in one place or limited area), it doesn't need to be battery powered either.

Since I would like all the noisy motors and stuff outside the robot wherever possible, I'm already going to have a (probably large) umbilical of push/pull cables from the skeleton/etc. out to the actuators, and so power for any of the internal electronics (sensors, first-level MCU processing, speakers, etc.) would come via a cable within that bundle, too.



Later I'd love to develop something that could actually follow me around or even be portable and "self powered", but I don't actually need that. :)


***In the 90s I worked on designing a wolf-shaped computer case, but I couldn't come up with anything other than liquid cooling that passed the refrigeration tubes out to something large and external that would let the computer inside function for more than a few minutes. IIRC it was a dual-P90 with WinNT when I started; I think I'd moved to Win2000 and some newer CPU/etc. by the time I concluded it just wouldn't work without the external stuff, which would be about as big and noisy as a regular computer tower, and dropped the idea, since all this was for my music-recording home-studio stuff and I could already make a regular computer case nearly silent.
 
Possibly. Off hand I'm not really sure, to be honest; I don't think I'm following you very well. It doesn't have to walk though, so I still think your best bet is just using simple formulas and not deep learning. I'm talking about controlling the servos, e.g. to explicitly position a limb in an arbitrary position or something; deciding where to position the limb is another matter altogether.
I might have to make a 3D model of the idea, and animate it, with narration, to explain it. :( I can clearly see this in my head...maybe if I open it up and take pictures? :lol: :oops:



Before I can even get to the point of creating a command to control the servos, I have to know where the interactions are. Figuring that out is not always as easy as "hey sensor 1 on paw 1 got a certain amount of input"...some of that input is on that paw, and some of it might also be detected in other paws.

To know where those are, I have to know where the parts are relative to each other, and knowing that, I can map it all out in 3D space.

Then all the vibration/etc. sensory input can be placed on that map. A single interaction is going to register on multiple sensors, on different portions of a body part or even on separate body parts, so the system has to figure out where the center of it came from, like an earthquake epicenter. Since the machine already knows where each sensor is (because it knows where the body part it's attached to is), it can place the interaction on the map.
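(Just to make the "epicenter" idea concrete: if the map already gives the 3D position of every sensor, about the simplest possible localization is a weighted centroid, where sensors that felt the event more strongly pull the estimate toward themselves. A toy sketch, with invented positions and magnitudes:)

[CODE]
# Hypothetical sketch: estimate where a touch/vibration "epicenter" is,
# given known 3D positions of each IMU (from the body map) and the
# vibration magnitude each one reported for the same event.
# The positions and magnitudes below are invented example values.

def locate_epicenter(sensor_positions, magnitudes):
    """Weighted centroid: sensors that felt the event more strongly
    pull the estimate toward themselves."""
    total = sum(magnitudes)
    if total == 0:
        return None  # nothing felt anything
    x = sum(p[0] * m for p, m in zip(sensor_positions, magnitudes)) / total
    y = sum(p[1] * m for p, m in zip(sensor_positions, magnitudes)) / total
    z = sum(p[2] * m for p, m in zip(sensor_positions, magnitudes)) / total
    return (x, y, z)

# Example: three IMUs along a forelimb, in body coordinates (meters).
positions = [(0.10, 0.02, 0.30), (0.12, 0.02, 0.15), (0.14, 0.02, 0.02)]
magnitudes = [0.8, 0.3, 0.05]   # strongest response nearest the touch
print(locate_epicenter(positions, magnitudes))
[/CODE]

A real version would need the magnitudes corrected for how vibration fades as it travels through the structure, but it shows why the map (the sensor positions) has to exist before localization can happen.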

When the location(s) of the interaction have been determined, then the proper response can be determined.

There are simplistic ways of just ignoring a bunch of data and assuming a response comes from only one spot, etc., but this will result in incorrect responses in the various instances where that isn't the case.


Maybe there's some less complicated way of doing all this but I've been trying to find one for a long time and it just doesn't appear to exist. :(

The only thing I have found that could be used to localize touches over a body is IMUs (or accelerometers), because they can detect very small vibrations, *and* they can tell which direction those are in...but since they can't tell *where* they are, data from multiple sensors have to be integrated together to figure that out. (There are probably other ways than a mapping function to do that...but the mapping function just seems like the best way, since you get a lot more out of it than just the simple touch data, and it could then be used to do other things with.)


It can't use pressure sensors to determine touches, pets, etc., because it's almost all covered in fur. A touch on the fur won't register on any such sensors under it; you'd have to mash it hard enough to push the fur all the way down and be mashing the "skin" instead, and that's not much of how people interact with dogs (or other fuzzy beings).

Even if it did use pressure sensors for this, it would require them over the entire "skin" of the body, and I don't know of anything that could be built that way--there's resistive cloth that can do some forms of touch detection, but AFAICT it wouldn't work under the fur very well, if at all.

Capacitive touch sensors can't be close enough together to do this, it would require far too many of them (hundreds or more), none of the ones I could afford that many of can do anything more than on/off detection, *and* they mostly don't work beyond about 1cm in open air and (far) less through materials.

In theory a theremin-like radio-antenna system could work through the fur, detect analog signals, and cover a relatively large surface area...but not with the antennas in such close proximity to each other, not without some fairly robust radio engineering I'm not capable of to keep them from interfering with each other.

Radar sensors that I could afford only do on/off detection and only more or less one per room, as they also interfere with each other. Not remotely useful for small-scale touch detection.

If you (or anyone else) know of sensor systems (that I could actually afford on a nonexistent budget :lol: ) that could detect light touches anywhere on a body with only a "few" of them (probably at least a couple of dozen), and still be used to localize the interactions to specific spots on the body (not just at the sensors), I'd love to know about them so I can investigate. :)
 
I'm not sure what issues you're currently trying to solve? Maybe I'm just not following you at all, but here's how I see it:

1. Actuators generally report their position, don't they? So I'm not sure what problem needs to be addressed there?
Yes, and no. Depends on the actuator you use and whether it is one that has to be directly monitored by the software.

Servos (probably most of what I'd be using for actuators) have built-in feedback to their internal driving electronics. You just feed them a PWM signal at a specific duty cycle (and frequency) and they move to the angle you told them to. (There are 360-degree servos that don't have the feedback, so you have to build your own position sensor into the mechanism to tell where it's at, and code for that.)

So...you know where you told them to be, but you don't know where they actually are. If there is sufficient resistance against them, they can't move to where you told them to be, so they'll be somewhere between that and their "off" point. To know where the controlled object is actually located, you have to have a sensor in the object itself that can tell you that--that's part of what the IMUs would do.
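(A minimal sketch of that commanded-vs-actual idea, MicroPython-style; the pin number, pulse range, and the read_measured_angle() helper are all assumptions rather than real project code:)

[CODE]
# Minimal sketch (MicroPython on an RP2040-class board assumed).
# Pin choice, pulse range, and read_measured_angle() are hypothetical.
from machine import Pin, PWM
import time

servo = PWM(Pin(0))
servo.freq(50)  # standard hobby-servo frame rate (20 ms frames)

def command_angle(deg):
    # Map 0-180 degrees onto a 1000-2000 us pulse (typical, varies by servo).
    pulse_us = 1000 + (deg / 180) * 1000
    servo.duty_u16(int(pulse_us / 20000 * 65535))

def read_measured_angle():
    # Stand-in: in the real thing this would come from the IMU on the driven part.
    return 72.0

command_angle(90)
time.sleep(0.5)                       # give it time to move
error = 90 - read_measured_angle()    # big error => stalled, slipping cable, or someone holding the paw
if abs(error) > 10:
    print("limb did not reach the commanded angle; off by", error, "degrees")
[/CODE]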

All position sensors are relative anyway, whenever they are attached to other moving parts. They only tell you where a part is from its "zero" position. They don't tell you where it is in relation to its environment, or to some other part that it's attached to that may have moved or even be in motion.

For instance, take a plain simplistic "robot arm" that's fixed to a tabletop and has a 3-axis shoulder, a one-axis elbow, a two-axis wrist, and a one-axis two-finger gripper hand. The hand sensor knows if the gripper is open or closed, but not whether it's pointing up or down or left or right--that's up to the wrist. The wrist knows whether it's up, down, left, or right, but it doesn't know if it's at an angle; that's up to the elbow. The elbow knows what angle it's at, but not which direction it's pointing; that's up to the shoulder. The shoulder can move in all three axes to point the entire assembly in any direction above the table (so 180 degrees of travel for two axes, and 360 for the last).
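(That chain of "each joint only knows its own angle" is what forward kinematics formalizes. A tiny 2D sketch with made-up link lengths, just to show how the fingertip position only falls out once the whole chain is combined:)

[CODE]
# Sketch of the "each sensor only knows its own joint" chain idea:
# 2D forward kinematics for a shoulder + elbow, with made-up link lengths.
import math

def fingertip_position(shoulder_deg, elbow_deg, upper_len=0.30, fore_len=0.25):
    """Each joint angle is relative to the previous link, so the end
    position only exists once the whole chain is combined."""
    a1 = math.radians(shoulder_deg)
    a2 = a1 + math.radians(elbow_deg)   # the elbow angle rides on top of the shoulder angle
    x = upper_len * math.cos(a1) + fore_len * math.cos(a2)
    y = upper_len * math.sin(a1) + fore_len * math.sin(a2)
    return x, y

print(fingertip_position(45, -30))  # example: shoulder up 45 deg, elbow bent back 30 deg
[/CODE]

The real wolf body is the same idea in 3D with more joints, which is exactly why the later sensors' data only means something once the earlier joints' states are known.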

So the data from each sensor farther down the line is dependent on the data from every sensor before it.

A robot that is not fixed in a particular spot relative to its environment, and has many more degrees of freedom, has a whole lot more dependencies between the sensor data sets when figuring out where the heck any particular part of the robot actually is, and whether it is likely to interact with some portion of itself or its environment.

That's where the wolfy-bot is at. It's not as bad a problem as one that has to walk around...but it's still pretty complicated.



EDIT (hopefully this is comprehensible; I woke from a(nother) "processing dream" about this -- I have lots of these, in between the emotionally-driven nightmares about stressful events past and present that are much of the reason for needing this project to succeed):

I forgot to also state (in this reply; I think it's covered in other posts before, though) that in the wolfy, it's likely that none of the actuators will be inside the actual robot; they will be outside in a box to mute their sounds (etc.), and will cable-operate their components. So there will be slop in the response that the actuator's direct sensing can't know about. To get correct info it needs to know where the actual components being driven are, and for that it needs data from the IMUs attached to the parts.

This also lets it know if something has broken between the actuators and the components they drive, as the component wouldn't be moving the way the actuator is, and the difference between the response and the expectation would help it know that and allow it to alert the user.

In addition (as I think was also covered previously), the user interactions are going to include the user moving the robot in whole or in part, and it has to "know" this is happening, to "cooperate" with it for the most part, or to "resist" it if playfulness responses indicate it should for that situation, etc. The IMU data would show what's actually happening to the robot's components to let it do this.


And I probably wasn't clear before on the human-readable data output the map would generate. Having a visible 3D model (regardless of how primitive the actual display is) would make testing and troubleshooting of the sensory system itself much easier, since that system is necessarily going to be complex in operation. It would allow visualization of the touches, to see that the system is correctly detecting where the touches are and their intensities and durations, so that localization and interpretation of the data could be corrected during initial development if it isn't.

Beyond initial development, it would be a useful testing tool to be sure sensors are operating as they were during training, dev, etc., since it would be visually obvious that their outputs have changed, without sorting through a bunch of signals, data, and code, or disassembling hardware (once assembled, it is not easy to get inside for testing, so it would be very good to have a visual representation of where a problem probably lies, to then check the data sources and know where to open it up).

I've done lots of hardware troubleshooting over my life, and a visual tool like this would be invaluable for these purposes, totally aside from the 3D map's usefulness in localizing the touches for more correct behavioral responses.
 
2. Detecting interaction with the environment is a tricky problem, as is processing that data and deciding what position the actuators should be put in. It sounds like you're trying to have it figure everything out every time it gets turned on. I think a much simpler, much more effective way of doing it is to impose certain constraints on the entire apparatus -- as in make assumptions about the design. A pressure sensor on the right foot will always send a signal that is from the right foot, and the place of attachment of that right limb can be assumed to always be to the right of the left foot. So there's no reason to detect whether or not the right foot is to the left of the left foot, or to the right of it (I assume the limbs are going to be too simple to have that much range of motion).

I guess I don't really know how to explain it, because it isn't designed the way you're describing. It doesn't have pressure sensors; it's using the IMUs (accelerometer/gyros). As noted in the part of the reply just above this, data can come into multiple sensors, and you have to know where they are relative to each other to know what the data means / where it came from, in some cases. (Not for all cases...it is about how a dog senses and reacts to interactions--for this discussion, specifically touch. If necessary I can try to write a scenario list of interactions and responses; that's what I expected to have to do to create the if/then/etc. lists for the program to interpret the data, but it is going to take a long while to do.)







I'm not sure what you need all these sensors for, and if you had all of this data, I'm not sure what you'd want to do with it.
Hopefully the explanations above will help with that (not sure since I thought I'd basically explained it earlier in the thread, but I'll have to re-read what I've already posted to see if I actually did that properly).

If it doesn't help, I'll try to come up with better explanations and descriptions somehow.
 
Robotics isn't my usual area, but I can at least estimate whether or not something would work well when applying machine learning to it -- and I honestly can't say whether or not all of that data could be processed with neural networks. I'm having too much difficulty trying to evaluate the setup. I'd need more familiarity with the components and the output they produce, etc., as well as some concrete information about what output would be required from the machine learning part of the system.

Well, if NNs aren't the thing to process the data with, that's ok. It just sounded like they might make it easier based on the little I've learned so far about them, and the things you'd said about using it to process the other project's data you were working on.

The output of IMUs is, in essence, XYZ-axis data for the gyroscope and accelerometer, for each sensor board.

Generally the data is in the numeric format of X.XX Y.YY Z.ZZ. (it might have more decimal places depending on sensitivity).

Let's say you're only reading the AM (accelerometer) data, the board is sitting there not moving with no vibrations or anything, and you "calibrate" it so that's the zero position: you get zeros for all of those, 0.00 0.00 0.00. If you pick the board up and it stays perfectly level and not rotated at all, the vertical-axis AM data will spike and keep changing until it stops moving, and then you get zero again. If you're rotating it, the AM data for all axes involved in the rotation change to whatever acceleration that causes until you stop moving it, then you get zero again.

The gyro data will be zeros at the calibrated zero position (calibration is usually done at powerup to give you a known reference); then if you tilt the board, the axes being tilted tell you the angle they're tilted at (depending on the sensor that might be in degrees, or some other number you have to convert). If you tilt it only on the Z axis until that axis is up and down instead of horizontal, then you'd get 0.00 0.00 90.00 if it's in degrees. Etc. It stays at that reading until you move it back.
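(For reference, reading raw accel/gyro values from an MPU-6050-style board over I2C in MicroPython looks roughly like the sketch below; the pins, address, and scale factors assume the chip's default +/-2 g and +/-250 deg/s ranges, so treat it as illustration rather than tested project code. Note that the raw gyro registers actually give angular *rate* in degrees per second; a steady tilt angle like the one described above normally comes from fusing the accel and gyro data, or from the chip's onboard motion processor.)

[CODE]
# Rough MicroPython sketch: raw accelerometer + gyro read from an
# MPU-6050-style IMU. Pins, address, and scale factors are assumptions.
from machine import Pin, I2C
import struct, time

i2c = I2C(0, scl=Pin(1), sda=Pin(0))
ADDR = 0x68
i2c.writeto_mem(ADDR, 0x6B, b'\x00')   # wake the chip out of sleep

def read_imu():
    # 14 bytes starting at ACCEL_XOUT_H: ax, ay, az, temp, gx, gy, gz (big-endian int16)
    raw = i2c.readfrom_mem(ADDR, 0x3B, 14)
    ax, ay, az, _temp, gx, gy, gz = struct.unpack('>hhhhhhh', raw)
    accel = (ax / 16384, ay / 16384, az / 16384)   # in g, at the default +/-2 g range
    gyro = (gx / 131, gy / 131, gz / 131)          # in deg/s, at the default +/-250 deg/s range
    return accel, gyro

while True:
    print(read_imu())
    time.sleep(0.1)
[/CODE]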



For the specific boards I happen to have, it can get more complicated if you use the onboard MPU to process things, but I can't find much info on how to use that to do anything; the chip datasheets/etc. don't include that, just the registers used to pass data back and forth--not what the data is, or what it tells the MPU to do...so I'm ignoring the MPU entirely, since the raw gyro and AM data are fine.
 
Since I would like all the noisy motors and stuff outside the robot wherever possible, I'm already going to have a (probably large) umbilical of push/pull cables from the skeleton/etc. out to the actuators, and so power for any of the internal electronics (sensors, first-level MCU processing, speakers, etc.) would come via a cable within that bundle, too.
Okay, well that changes things pretty radically then: in that case, sure, you can go hog wild with the computational requirements (so long as you don't mind the possibility of being electrocuted in your sleep because you drooled on the AC plug, lol).

There's a whole lot here to digest, and I'll do my best to get to it all. I've been working 60+ hour weeks again for the past couple of months, so please don't take offense if I seem slow to respond. This is one of three links I try to check daily, so I'll get to it 🤗
 
Yes, and no. Depends on the actuator you use and whether it is one that has to be directly monitored by the software.

Servos (probably most of what I'd be using for actuators) have built-in feedback to their internal driving electronics. You just feed them a PWM signal at a specific duty cycle (and frequency) and they move to the angle you told them to. (There are 360-degree servos that don't have the feedback, so you have to build your own position sensor into the mechanism to tell where it's at, and code for that.)

So...you know where you told them to be, but you don't know where they actually are. If there is sufficient resistance against them, they can't move to where you told them to be, so they'll be somewhere between that and their "off" point. To know where the controlled object is actually located, you have to have a sensor in the object itself that can tell you that--that's part of what the IMUs would do.
You mean it's not possible to monitor the current and use simple formulas to judge whether or not it's having difficulty, or at a midpoint or something?

... I knew I should have taken at least an intro to electrical engineering course in school. I'm speechless.
 
Hopefully the explanations above will help with that (not sure since I thought I'd basically explained it earlier in the thread, but I'll have to re-read what I've already posted to see if I actually did that properly).

If it doesn't help, I'll try to come up with better explanations and descriptions somehow.
I think our problem is primarily that all of your experience is in stuff that I have pretty much none in, and vice versa; definitely not your fault, I hope I didn't sound like I was implying that or anything.

Basically I need to "see" the raw data, or at least be able to imagine the 0s and 1s that would be fed into an artificial neural network, to have any more informed comments to add.
 
Generally the data is in the numeric format of X.XX Y.YY Z.ZZ. (it might have more decimal places depending on sensitivity).

Let's say you're only reading the AM (accelerometer) data, the board is sitting there not moving with no vibrations or anything, and you "calibrate" it so that's the zero position: you get zeros for all of those, 0.00 0.00 0.00. If you pick the board up and it stays perfectly level and not rotated at all, the vertical-axis AM data will spike and keep changing until it stops moving, and then you get zero again. If you're rotating it, the AM data for all axes involved in the rotation change to whatever acceleration that causes until you stop moving it, then you get zero again.

The gyro data will be zeros at the calibrated zero position (calibration is usually done at powerup to give you a known reference); then if you tilt the board, the axes being tilted tell you the angle they're tilted at (depending on the sensor that might be in degrees, or some other number you have to convert). If you tilt it only on the Z axis until that axis is up and down instead of horizontal, then you'd get 0.00 0.00 90.00 if it's in degrees. Etc. It stays at that reading until you move it back.
Here's where I'm currently tripping over having more to say: I don't know of a good way to process a live, constant data stream like that. It's not something I've read much of any research about. Usually these things just sit there and do nothing until you come along and prompt them to do something by feeding a discrete chunk of data to them. A continuous stream of data is a little different, and I'm not having much luck coming up with a practical way of handling it.

A naive approach (which is often surprisingly effective, don't get me wrong) would be to just buffer the data for periods of time -- say 5 seconds at a time or something -- and then process that 5 second chunk before repeating the process over and over again. I wanted to have something more interesting to say than that, but it hasn't come to me yet.
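(To make the "buffer a chunk at a time" idea concrete, here's a toy sketch; the 5-second window, 100 Hz rate, and the read_one_sample() helper are all placeholders:)

[CODE]
# Toy illustration of the "buffer N seconds, then process" approach.
# The sample rate, window length, and read_one_sample() are placeholders.
SAMPLE_HZ = 100
WINDOW_SECONDS = 5
WINDOW_SIZE = SAMPLE_HZ * WINDOW_SECONDS

def read_one_sample():
    # Stand-in for one timestep of all-sensor readings.
    return [0.0] * 6

def process_chunk(chunk):
    print("processing", len(chunk), "timesteps")

buffer = []
while True:
    buffer.append(read_one_sample())
    if len(buffer) == WINDOW_SIZE:
        process_chunk(buffer)   # nothing "reacts" until a full window has accrued
        buffer = []
[/CODE]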

Well... I guess, actually, you could do it with a standard LSTM (long short-term memory) model. It would just mean feeding timesteps to it piece by piece. Yeah, nevermind, I'm just being stupid -- that would work fine. Could even have some convolutions for good measure if you wanted to try them out.
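(For what it's worth, a bare-bones PyTorch sketch of that idea -- every size here is an arbitrary placeholder, and the per-timestep feature vector is just a guess at what the IMU network might produce:)

[CODE]
# Bare-bones PyTorch sketch of an LSTM consuming IMU timesteps.
# All sizes are arbitrary placeholders.
import torch
import torch.nn as nn

N_SENSORS = 24             # guessed number of IMUs
FEATURES = N_SENSORS * 6   # 3 accel + 3 gyro values per IMU

lstm = nn.LSTM(input_size=FEATURES, hidden_size=64, batch_first=True)
head = nn.Linear(64, 8)    # e.g. 8 made-up "interaction type" classes

window = torch.randn(1, 500, FEATURES)   # one 5-second window at 100 Hz
out, (h, c) = lstm(window)               # out: (batch, timesteps, hidden)
prediction = head(out[:, -1, :])         # classify from the last timestep's state
print(prediction.shape)                  # torch.Size([1, 8])
[/CODE]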

In my experience, designing an architecture to suit a problem is at least 50% trial and error. It's extremely difficult to know what would work best for a given application. It would be great if there were one model that was best for a big range of problems, but that's just not how it ends up working out in practice. You don't have to take my word for it, you can just go to paperswithcode.com and see the new architecture that's great for this-or-that on a daily basis. There's usually some credible sounding theory underlying them, but if you ask me, it really is just guessing and trying random stuff half the time to see what works best.
 
Okay, well that changes things pretty radically then: in that case, sure, you can go hog wild with the computational requirements (so long as you don't mind the possibility of being electrocuted in your sleep because you drooled on the AC plug, lol).
Not much chance of that; the power in would all be low-current 5VDC stuff (maybe 12V to power the speaker better, converted down to 5V/etc. internally, but that's unlikely to be necessary). At present I'm successfully using a cheap 5V amplifier (PAM8403)
and a sound-playback board (DY-SV5W)
with a simple switch on some of the input pins to either snore, breathe slowly and quietly as if sleeping, or pant quietly. I rarely use the last, but sometimes it's comforting when I'm not trying to sleep. The volume knob lets me quieten it down if necessary (I have to feel under the fur by the shoulder for the switch and the knob, as it's all mounted inside an old computer speaker that is secured inside a foam-block "ribcage", which itself is secured to the "spine"). That speaker enclosure holds the "bass" speaker, and there's a much smaller "treble" speaker inside the head. The sound files were edited and EQ'd to distribute the sounds more realistically between those two, so they more or less sound like they should, as if they were coming from the correct parts of a dog's lungs, throat, and nose. There's a USB extension cable glued into the board's port that I can pull out of the fur to reprogram the sounds for experiments, but I haven't done that in a while.


There's a whole lot here to digest, and I'll do my best to get to it all. I've been working 60+ hour weeks again for the past couple of months, so please don't take offense if I seem slow to respond. This is one of three links I try to check daily, so I'll get to it 🤗
If you think it's a lot to digest now, wait till I have eventually described everything I would like to happen, and how... ;)

But most of the rest depends on how the in-development parts work out.

I'm in "no hurry" to get this all done; it happens as fast as it happens. Ideally, it'd all be really really fast, but I couldn't do it and wouldn't expect anyone else, especially volunteer help, to either. :)

I am hoping at some point things will just start coming together, like they usually do on other projects, but I don't think this is anywhere near that point--even the specifications/explanations haven't got there yet. :lol: :oops:

(I should probably really write a complete spec document...I started to, but as usual got distracted by actually doing things).
 
You mean it's not possible to monitor the current and use simple formulas to judge whether or not it's having difficulty, or at a midpoint or something?
Yes, and that has also been mentioned in some of the previous posts in the thread, for ensuring it doesn't overload something and can shut down. But other things can cause current spikes, and measuring current doesn't tell you that something else is moving the actuated parts in some way that the actuators themselves are not (especially if they are not presently active).
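(The current-monitoring part by itself would be simple enough -- something like the MicroPython sketch below, where the ADC pin, the sensor scaling, and the trip threshold are all invented values. It catches overloads, but as noted, it can't see someone moving a limb while the servo is idle.)

[CODE]
# Hypothetical overload watchdog: read a current-sense voltage on an ADC pin.
# The pin, scaling, and threshold are invented values for illustration only.
from machine import ADC, Pin
import time

current_adc = ADC(26)                 # ADC pin wired to a current-sense output (assumed)
AMPS_PER_COUNT = 3.3 / 65535 / 0.5    # assuming a 0.5 V-per-amp sensor on a 3.3 V ADC
STALL_AMPS = 1.2                      # made-up trip level

while True:
    amps = current_adc.read_u16() * AMPS_PER_COUNT
    if amps > STALL_AMPS:
        print("servo drawing too much current -- back off or shut down")
    time.sleep(0.05)
[/CODE]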

So...that's one reason to use the IMU data to figure out where all the parts are and how they're moving and how fast and in what direction, etc., and keep track of that.

Also (I think I posted this before, not sure; there is SO MUCH STUFF in my head about this project): knowing the track of the motions and such, the same data can be used to "teach" specific motions and behaviors, as a form of "direct motion capture." Faster than by-guess-and-by-gosh methods--let's say I wanted to teach it to lift a paw for a "high five": I could just grab the paw and move it and the rest of the limb in the way and direction and speed I want the result to be. It could be something that requires a change of modes, so it's not "operational" during these times but is instead just recording sensor data. Or it could be a "hey Fido, lemme teach you something" command, etc. I have a number of possible ways it could work.
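(A very crude sketch of just the record-and-replay core of that idea; both helper functions are hypothetical stand-ins for the IMU reading and the servo/cable driving that don't exist yet:)

[CODE]
# Crude "motion capture" sketch: record joint angles while the limb is moved
# by hand, then replay them. Both helper functions are hypothetical stand-ins.
import time

def read_joint_angles():
    return {"shoulder": 0.0, "elbow": 0.0}   # would come from the IMUs

def command_joint_angles(angles):
    pass                                      # would drive the servos/cables

def record(duration_s=3.0, rate_hz=50):
    frames = []
    for _ in range(int(duration_s * rate_hz)):
        frames.append(read_joint_angles())
        time.sleep(1 / rate_hz)
    return frames        # this list is the "high five" saved to the motion library

def replay(frames, rate_hz=50):
    for f in frames:
        command_joint_angles(f)
        time.sleep(1 / rate_hz)

high_five = record()     # grab the paw and move it while this runs
replay(high_five)
[/CODE]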

I have an entire system concept in my head for training motions this way, and editing them in a computer-based GUI before saving them to the robot's motion-memory-library, but I haven't posted that yet. I'll do that at some point, but right now it'd just get confusing with all this other stuff being discussed. By itself, it's pretty complicated...but would allow "anyone" to teach something specific to it in an easier way than teaching an actual dog to do it.



... I knew I should have taken at least an intro to electrical engineering course in school. I'm speechless.
What makes you speechless?


FWIW, I've never had any engineering courses (there's way too much math; I just couldn't follow it)...I learned some electrical and electronics stuff from a distant neighbor (who I only met because I crashed my bicycle down a gravel hill near their house) down the road in rural Texas as a kid, as he taught me about ham radio (I was KA5TWP but haven't done that stuff for decades); that was all vacuum-tube stuff though. Not long after, I had the short "technician" course at DeepFry...er...DeVry back in the late 80s, which taught me more modern electronics component-level troubleshooting and repair skills, but not how to design the stuff or much beyond the very very basic idea of how it works (just what was needed to understand why something might not be working).

So anything I happen to know I learned the hard way, by building and breaking things, and by having to figure out how to fix the things others didn't want so I could have one of whatever too-expensive-to-buy thing it was. :) Often enough, it's out of my skillset to deal with...and sometimes I can add enough to my skillset to do it, and sometimes I can't.

Same for mechanical stuff--no mech engineering; just mostly hacking existing structures/mechanisms/etc to misuse them in a way that does what I want...and reworking them as needed when my usage is outside the limitations of the existing stuff.


I know...well, more than enough to be less dangerous to myself and others than I was when I started out. :lol: And just enough to accomplish many of the things I set out to do in various projects...though they usually require multiple stages of refinements to end up doing what I really want them to. Sometimes I find out that what I *actually* wanted it to do is different from what I *thought* I wanted it to do, *after* I build it. :oops: Sometimes that's adaptable, and sometimes it's not.


Some things I just "do", like playing/creating music, or sketching/drawing, "kitbash" model building, sculpting, etc; just come to me, and "organically come together". But things like this project require considerably more actual work to achieve, and this is very difficult for me to do; I have a hard time pushing myself to do things that require actual sustained hard (mental) work (physically-exhausting hard work I do every day; I actually enjoy the doing much more than the designing...this is very hard to describe, and isn't accurate....so I'll stop).
 
Here's where I'm currently tripping over having more to say: I don't know of a good way to process a live, constant data stream like that. It's not something I've read much of any research about. Usually these things just sit there and do nothing until you come along and prompt them to do something by feeding a discrete chunk of data to them. A continuous stream of data is a little different, and I'm not having much luck coming up with a practical way of handling it.

A naive approach (which is often surprisingly effective, don't get me wrong) would be to just buffer the data for periods of time -- say 5 seconds at a time or something -- and then process that 5 second chunk before repeating the process over and over again. I wanted to have something more interesting to say than that, but it hasn't come to me yet.

Well... I guess, actually, you could do it with a standard LSTM (long short-term memory) model. It would just mean feeding timesteps to it piece by piece. Yeah, nevermind, I'm just being stupid -- that would work fine. Could even have some convolutions for good measure if you wanted to try them out.

I don't know enough about these things yet, so I don't understand why they would operate on a stream of data frames any differently than on sequentially-input single frames of the same data? I'm sure there's a reason....

Meaning...the data would be captured for one "frame" of movement (like a frame of video) for all the sensors. Then process that data to the "map". Then capture another frame, process it and alter the map (and store the previous map for later comparison, so changes over time can be looked at even if it's only a manual troubleshooting process, like playback of "WTF happened when we did *this*?" :lol: ).

(assuming we're still talking about data capture/processing of the IMU data for the map thing--if it's about something else, I missed it).
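(In code terms, the loop I'm imagining is roughly the toy sketch below; every function in it is a placeholder for something that hasn't been written yet.)

[CODE]
# Toy version of the per-frame capture -> map update -> compare loop.
# Every function here is a placeholder for something not yet written.
def capture_frame():
    return {}          # one snapshot of all IMU readings

def update_map(previous_map, frame):
    return dict(previous_map)   # re-estimate where every body part is

history = []           # kept for "WTF happened when we did *this*?" playback
body_map = {}
while True:
    frame = capture_frame()
    history.append(body_map)            # store the old map for later comparison
    body_map = update_map(body_map, frame)
[/CODE]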


In my experience, designing an architecture to suit a problem is at least 50% trial and error. It's extremely difficult to know what would work best for a given application. It would be great if there were one model that was best for a big range of problems, but that's just not how it ends up working out in practice. You don't have to take my word for it, you can just go to paperswithcode.com and see the new architecture that's great for this-or-that on a daily basis. There's usually some credible sounding theory underlying them, but if you ask me, it really is just guessing and trying random stuff half the time to see what works best.
I'm very familiar with this approach. :lol: In all sorts of things I do (even my "art") I usually have some idea of what to do, but not how to do it, so I end up doing this sort of thing....

Sometimes I have to design an approach to a troubleshooting problem. Meaning, there's a problem I've never encountered, but it has elements I recognize, so I have to come up with a plan to test things to find the cause--but not knowing what the cause might be, it's hard to do that, so I have to guess which things to test first without wasting too much time, or potentially making the problem worse, or creating new ones that will complicate the whole process. :/
 
I think our problem is primarily that all of your experience is in stuff that I have pretty much none in, and vice versa; definitely not your fault, I hope I didn't sound like I was implying that or anything.
Oh, don't worry--almost no one seems to really understand whatever it is I'm talking about. It is very hard for me to communicate the things in my head correctly, completely, but concisely, because there is so very very much going on in there all the time, all at the same time, and I have to pick out the bits that are related to the thing in question and sort them from all the other unrelated stuff, then condense it into words other people will understand***, and also not leave out stuff that is still in my head that didn't make it to the page yet...but that I already imagined writing...(I often have a very hard time distinguishing between the things I imagine and the things actually out there in front of me, because the imagining is extremely detailed and complete down to sensations...I can't even describe the problem well...****).

***this is particularly hard because few people care to "listen" to my "ramblings"; there is just too much information for them to process, but I have to include all of the details to get things across, or else they can't possibly understand what I mean, not really (I've tested this on various occasions), and generally they don't care enough about anything I have to say, not enough interest in whatever it is I'm interested in, to even attempt to absorb enough of the information for me to really communicate with them. I'm sure this has a teensy tiny bit to do with my exclusion from virtually all social circles.... ;) It isn't that I'm any "smarter" than anyone else (I'm frequently quite a lot dumber despite what "IQ tests" say)...just that what I'm thinking is often so different from what they are that the common bits just aren't enough for reliable communication. (and I don't do realtime comms well because it takes me way too long to sort out all the stuff in my head to get the bits out I need to get across).

I try really hard to "be normal" whenever I'm just dealing with all the usual stuff between me and other people, but when it comes to things in my head to get them out to the world, especially when they're as complex as this project.....

**** I had added those stars to point to this explanation-continuation, but it got lost while I was writing the first (***) branch out. :/ So I am not certain how I intended to continue--all the bits and pieces of it are mixed up with all the other stuff again in my head.


Basically I need to "see" the raw data, or at least be able to imagine the 0s and 1s that would be fed into an artificial neural network, to have any more informed comments to add.
Well, I can post a stream of IMU data from a sensor in a given situation, if that would help. (I have to figure out how to make it log that data, but I'm sure there's code out there for this already.)

I'm not sure if that's what you're talking about or not. If you mean the whole data stream from the IMU network...it is likely to be at least a little while before I get that built, since I have to physically construct a skeleton to mount at least a dozen of them on first, mount them, wire them, then get the code working to read them all and write that data to some sort of organized file. If I didn't have to have the dayjob to survive, I could do the physical bits in a couple of weeks or so (depending on how fast the 3D printer can manufacture the skeleton parts). But the code stuff...I don't even know enough to give an estimate on that part yet. :oops:
 
I don't know enough about these things yet, so I don't understand why they would operate on a stream of data frames any differently than on sequentially-input single frames of the same data? I'm sure there's a reason....

Meaning...the data would be captured for one "frame" of movement (like a frame of video) for all the sensors. Then process that data to the "map". Then capture another frame, process it and alter the map (and store the previous map for later comparison, so changes over time can be looked at even if it's only a manual troubleshooting process, like playback of "WTF happened when we did *this*?" :lol: ).

(assuming we're still talking about data capture/processing of the IMU data for the map thing--if it's about something else, I missed it).
Look at it this way: how many updates per second is the hardware going to be sending out? Now compare that with my glossing over the process of using convolutions on a picture (where the kernel matrix gets "rolled" over the pixel data -- which is itself a bunch of matrices stacked together -- one region at a time and "multiplied" by the matrix of the pixels in that square region of the image [multiplying the two matrices is my analogy for what a convolution is really doing]). It's not operating on 30 video frames per second; it's operating on one single bitmap of size 3x256x256 (3 channels [red-green-blue usually], each of which is a 256x256 grid/matrix/"bitmap", where the position inside those 256x256 (height x width or width x height) matrices indicates pixel position inside the image). If you want to use convolutions on a stream of video data, like a 30fps .avi file or something, usually people just extend the convolution operation to include an extra dimension (the time dimension). That's one of the reasons they're so convenient: they can operate on multiple dimensions (channels are just a dimension, abstractly) simultaneously. But then you're still just dealing with one discrete chunk of data (the entire .avi file with [30 * DURATION_IN_SECONDS] x 3 x 256 x 256 values, or a 4-dimensional array/vector if viewed as 2D pixel maps with an added color axis/dimension and then an added time axis/dimension).
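(To put concrete numbers on the shapes above: in PyTorch, the added time dimension is just one more axis for a 3D convolution to slide over. The sizes below match the 30 fps, 3x256x256 example; it's purely a shape illustration.)

[CODE]
# Shape illustration of adding a time axis for convolutions (PyTorch).
# Numbers match the 30 fps, 3-channel, 256x256 example above.
import torch
import torch.nn as nn

seconds = 2
clip = torch.randn(1, 3, 30 * seconds, 256, 256)  # (batch, channels, time, height, width)

conv3d = nn.Conv3d(in_channels=3, out_channels=16, kernel_size=3)
features = conv3d(clip)
print(features.shape)  # torch.Size([1, 16, 58, 254, 254]) -- the 3x3x3 kernel trims each spatial/time axis by 2
[/CODE]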

So what I'm saying is that it isn't immediately obvious to me what a good way of turning a constant stream of input data into discrete chunks for processing is, in these circumstances. At some point there has to be an arbitrary buffer collection value, where processing is delayed while input values accrue and are collected into a chunk for processing. Everything kind of depends on how that's done, and how long of a delay is used. With something like time series forecasting (the project I mentioned previously about forecasting the available supply of water), you can choose to use hourly, daily, or even minute chunks of time. But it would take experimentation and a more concrete design (or real-world examples of the input data) for me to know where to begin, is pretty much what I'm getting at.

Remember, the entire duration that input is buffering and being collected into a chunk is time that there's literally nothing happening for. So even 5 seconds at a time would have a dramatic influence on reactivity.
 
Oh...ok. I didn't get that it was only working on totally static data.

I'll have to think about this because there is probably some solution, once I know more about how stuff like this works (but I have a long way to go on that).

As far as how much data there will actually be, I don't know. Minimum of 3 axes x 2 sensor types for each IMU, x however many sensors there are (best guess is a couple of dozen, minimum of a bit more than half that, potentially a few dozen if they're not as sensitive as I think they are). Don't know how often they have to be sampled to give reliable position and vector and velocity data--they can stream pretty quickly, from the demo code sketches I've played with so far.
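(A back-of-envelope check on those numbers, with every value a guess -- say 24 IMUs, 6 values each, 2 bytes per raw value, sampled 100 times a second -- puts the raw stream around 30 kB/s, which is pretty modest:)

[CODE]
# Back-of-envelope data rate, with every number a guess/assumption.
imus = 24
values_per_imu = 6        # 3 accel + 3 gyro
bytes_per_value = 2       # raw 16-bit readings
sample_rate_hz = 100      # assumed; the real required rate is still unknown

bytes_per_second = imus * values_per_imu * bytes_per_value * sample_rate_hz
print(bytes_per_second)            # 28800 bytes/s
print(bytes_per_second * 8 / 1e6)  # ~0.23 Mbit/s
[/CODE]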

The same data stream also includes the vibration/touch data embedded within the same channels. So a separate computing process would be used to find non-noise signals in it, localize those to the specific sensor groups they came from, and compare signal strengths against the map data generated by the first computing process above, to know where "exactly" the signal came from. That, plus the type of signal over time, can be used to figure out what the interaction causing the signal actually is.
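(One common way to pull the fast "touch" wiggles out of the same channel as the slow limb motion is a simple high-pass filter: track a running average and look at what's left when you subtract it. A toy version, with made-up tuning values:)

[CODE]
# Toy high-pass filter: the slow part of the signal (limb motion, gravity)
# is tracked by a running average; what's left over is the fast vibration.
# ALPHA and THRESHOLD are made-up tuning values.
ALPHA = 0.05        # how quickly the baseline follows slow changes
THRESHOLD = 0.1     # made-up "this looks like a touch" level

baseline = 0.0
def vibration_component(reading):
    global baseline
    baseline = (1 - ALPHA) * baseline + ALPHA * reading
    return reading - baseline

# Fake accelerometer samples with a "tap" in the middle.
for sample in [0.0, 0.01, 0.02, 0.9, 0.02, 0.01]:
    v = vibration_component(sample)
    if abs(v) > THRESHOLD:
        print("possible touch, strength", round(v, 2))
[/CODE]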

Maybe each of these things would all be separate computing processes running in delayed-parallel (staggered so the necessary data from one is available for the next), possibly on separate MCUs with separate code?

ATM I'm still designing "in a vacuum" since I don't yet know enough about the physical hardware capabilities or the software capabilities...I only know what I want to happen, but still have to learn much more to *make* that happen. :oops:
 
Oh...ok. I didn't get that it was only working on totally static data.

I'll have to think about this because there is probably some solution, once I know more about how stuff like this works (but I have a long way to go on that).
Just because I personally can't think of a good way of doing it without sitting down to actually do it, doesn't mean that it can't be done or that it's too hard to do. I'm not a super genius or anything -- just your regular next door kind, hah. I hope I didn't discourage you 🤗
 
Well, as you said, you'll need data to work with, so other than posting ideas as they come to me and I work them out, I'm trying to focus on building sufficient hardware to get some of that data. Since I will need a skeleton anyway, I'm figuring out how to 3D print the pieces so they'll go together and stay that way without breaking, and pass the cabling along them in some functional way. The cabling will, at least for now, just be bicycle brake/shifter cables/housings, since I have those, pulled by the servos I have.

But all I need for the data collection is the functional skeleton, to be moved by hand, with the IMUs mounted to the bones, and wired up to one or more MCUs to collect and store the raw data.

Once I have that I can stick that in a file along with the motions used to create the data, the data format, code that generated it, and the hardware design, here in the thread, and then you can see if it is usable to interpret.

Does that sound like a useful plan?


FWIW, other than the (lots of) very specific but miscellaneous bits of knowledge I've collected and blended together improperly in my head, I'm no genius either--at best average, and more often retarded about many things, with wacky ideas that almost never work out (certainly not as planned). ;)
 
Still trying to learn enough to program the IMUs to do things; found this
that may be useful.

Haven't yet been able to finish editing the skeleton model(s) enough to print them and build as a testbed to get the hardware installed.

Then I have to learn how to create code that will capture and log the data from multiple sensors at once, into a file (some kind of database; don't know what or how yet). Not very good at learning this stuff, so will probably be a long while at the present rate.
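(For the capture-and-log part, a plain comma-separated text file is probably enough to start with -- no real database needed. A minimal MicroPython-style sketch, where read_all_imus() is a hypothetical stand-in for polling every sensor once:)

[CODE]
# Minimal CSV logging sketch (MicroPython-style). read_all_imus() is a
# hypothetical stand-in for polling every sensor once.
import time

def read_all_imus():
    return [0.0] * 24 * 6    # one flat row: accel+gyro values for all sensors

with open("imu_log.csv", "a") as log:
    for _ in range(1000):                      # log roughly 10 s at 100 Hz
        row = read_all_imus()
        log.write(",".join(str(v) for v in row) + "\n")
        time.sleep(0.01)
[/CODE]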
 
Tried to go back to learning some code by using some other project examples, but am again stonewalled. As usual, I can't even compile the project and have found no workarounds. These are the kinds of things that deter me from coding, because I can't understand why it doesn't work, and there are no options in the software to either tell me what's wrong, or to let me manually fix things in ways suggested by online posts for similar (or even identical) problems.


The present example is fairly unrelated to this thread's project, but I have a cheap ($3) NodeMCU with a tiny OLED built in,
and before I tried making it detect and display the angles from one of these MPU sensors (to see if I could learn to code something for that from scratch instead of just copying an example code set), I thought I'd try to get one of the various very simple (haha) ESP32 oscilloscope projects (the first one of those listed below; there are several others out there too) adapted to it, just to see if it would run on it at all and show me something interesting on the display...and it might actually be useful, too.


But...the project requires several "includes" such as adc.h, which in turn has its own includes like gpio.h...etc.

I managed to force the system to find the adc.h (no version of which is anywhere on my computer) by locating a version of it in an ESP32 library here
and just putting the file inside my "libraries" folder in a subfolder "ADC", and then using the ENTIRE LOCAL PATH to that file, since no shorter version would work, and the Arduino IDE is too stupid to bother checking for files anywhere you don't specifically and completely point it to.

But the technique didn't work for the sub-includes, even when I edited the adc.h file with the paths to them, and even when I stuck them in the same ADC folder and edited adc.h itself to point to those exact full paths.

I opened the adc.h file in the IDE and tried compiling just that, but even with those edits it won't compile, sticking on being unable to find gpio.h, even though I stuck that in there the same way I did the adc.h, and put the path in, etc. I wasted about four hours on internet searches and attempts to either use the fixes I found, adapt them to my system, or just try logical derivatives of them or of the things I'd done for the adc.h that worked. No luck.


Aside from my lack of ability to really grasp what functions can be done by what kinds of software commands (meaning, while I understand the idea behind them, I don't know all of the commands and how to build things with them...no, that's not really what I mean; I don't even know how to say what I mean), and not knowing how to even conceptualize turning an idea I have about what needs to be done into actual code (not knowing what code does what, and finding that I don't understand what other people mean when they discuss what can be done to do these things), stuff like the above is what keeps me from doing anything about the software for this project, and sends me into some pretty deep depressions about it that keep me from coming back to even try things very often. :(


I usually end up closing all the coding stuff and opening SONAR and making music, because at least that is usually calming, if often difficult; I understand what's going on in there. I spent the last couple of months more on The Moon, It Read To Me, And It Was Bright than coding attempts, because as hard (and often frustrating) as it is to do the music, it's MUCH easier than the coding, and at least it provides some reward in that I can get results I can experience...all I get out of coding attempts are frustration and stress.

(I also always have...not sure what to call them. Brain-shifts; I have so many different things in my head that they don't all fit at once. Sometimes the one I am trying to work on has taken a trip to an alternate universe, and I can't access all the required info for it, can't get it all "back" in my head, so I have to work on something else until it comes back. (sometimes stress from daily events builds up and drives *all* of it away for a while.) That delays projects as much as any of the other things I've discussed here. :/ )


I understand what *can* be done...I know the things I generally need to do to do them, but I have no idea what actual code to use to do them. And I don't understand how to figure that out from scratch.

And even when I find existing code that seems to be able to do something I want to do, I usually can't even get it to compile to be able to run it and see what happens. (sometimes I get lucky and it just works, or I can hack at it with logical deductions and make it work on my hardware....but almost never does this happen).

I can't afford to pay a good dedicated programmer to do it, but that's the only option I can think of to get anywhere; I've been trying to learn actual coding for so long (and have actually done coding for simple stuff many years ago, in BASIC and even assembly, so I understand the principles), with just frustration at every attempt to do even the simplest things with existing tutorials (most of the time even simple things like Hello World code examples don't work for me, just copied and pasted in, and I can't see why not).

Because I need this project to happen, I won't give up on it...but I am stuck; I'll keep developing the ideas themselves, I'll keep working on the hardware (3d-printed "skeleton" models and then actual parts, motor drives, molds for parts that have to be cast, patterns to make the exterior coverings from, etc), and I'll keep poking at coding in the slim hope I'll get that "eureka" moment.


So...for the moment, at least, that leaves this project fairly dead-ended at this point.
 
I came across the projects linked below a while back; there is something special about wolves, and they've inspired some very dedicated work.

If you're not yet steeped in Arduino, what about moving to a higher level with Python instead? The development environment needs less setting up than Arduino's, and Python's expressiveness is better for data crunching and complexity.

Your project outline is very ambitious in the amount of detail to complete, even if it were all straightforward. Keeping with the high-level theme, have you started on your own neural network to understand more about what it is you're getting from the wolfies? Maybe whatever you need most, responsiveness say, could be tackled early on in some direct way (such as the sounds adapting with handling), with your full vision following on after. (Sorry if I've missed this up-thread.)

WolfTronix Animatronic wolf head

TheMariday.com - Highbeam

multiwingspan

MicroPython - Python for microcontrollers

How To Use A Servo With Raspberry Pi Pico - YouTube
(Edit: moved link to micropython section)
 
(I'm putting this update post on my forum-disappearance here in the wolfy project thread as it is more relevant to it than any other, and I didn't want to make a thread just for the one post. It is lengthy, but I spent a few days working this out before coming back here to post it, and more time trying to condense it down once I'd drafted it here before actually submitting it--I just couldn't cut out any more, so I tried to arrange it by what's likely important to other people's reading order vs. what was important to me.)


I'm still taking a break from the forum (more or less from the world in general, except for my dayjob I can't avoid), to de-stress and have more time to concentrate on trying to learn the things needed to move forward with this specific project. I have no idea how long this break will be, but until I get sufficiently far with the project, and other things get easier/better, I probably just won't have time or energy to be here except for infrequent updates to this thread, if I ever learn enough to do what's needed.

The original reason for starting the break was someone pointing out my inability to see that my behavior was wrong (being autistic on top of eternally exhausted doesn't make this easy) so I decided to walk away until I could figure out what I did wrong, and how to avoid it in the future, since it's not the first time I've offended and had no idea what I did wrong. I still have no idea how to do that, but after a few days I also realized I had other reasons to keep myself away (see the section below), and then shortly after that I was handed another mess by fate (see the section after the one below):


I still suck at figuring out the Arduino IDE programming environment (it's very badly designed if it can even *have* the problems that it does in letting people like me figure it out--I'm not a complete newbie, and I understand the basics, but the software is almost deliberately designed to make it as hard as possible to set things up and to fix issues, especially with dependencies--if even one little thing is wrong there, it can take weeks or months to fix, because there is NO way to see all these things at once, no one place for all settings, etc., and NO way to just point it at a folder that you KNOW has EVERYTHING in it and let it figure it out, or globally change paths, etc.).

I still don't yet know how to generate an accelerometer dataset that might be useful for someone (RideOn, etc.) to see the data generated by the system from specific movements, and use that to create some form of software that would coordinate that into "responses" (and I have no idea how to do that myself--guessing at this rate it would take decades for me to learn on my own). (I also don't really know how the dataset will help, but I don't really need to know that, I guess--I just have to generate it...but by the time I figure out how to do that, I might as well just do the whole thing by myself, so I don't have to do work that doesn't make sense in the context of the project.)


This whole thing is very frustrating, because I know what the system as a whole needs to do, and what various subsystems need to do, and even how to build much of the hardware...but while I know the general blocking-out of what kind of coding needs to be done, I don't know how to code things to run that hardware or do the work the subsystems need...yet I can also see it shouldn't be that hard if I did know coding, and had the brain type that can do that stuff. If I could have a coding AI read my mind and take it from there, it would be great. :/ I apparently can't communicate with actual programmers; we just don't understand each other, so I don't know that any method other than some noob-calibrated AI will ever help...


The only "gratifying" parts of the project have been physically working with sculpting molds to cast various parts with, or reworking the prototype wolfy's face and paw coverings and shapes, etc., as those I can do whenever I can physically just manipulate things while laying or sitting there, and I hardly have to be able to think to do those, and I am usually left with some sense of accomplishment, instead of just deeper and deeper frustration that the coding and/or learning-software attempts always do. I should take some pics of the in-progress molds and the present prototype state and post them here for future reference. (I did attach some pics of the Schmoo; see below for context).



I'm also dealing with JellyBeanThePerfectlyNormalSchmoo's recent health issues, and this is much more stressful than with previous dogs because of what I went thru with some of those (especially Teddy).

The schmoo started having (still unexplained) grand-mal seizures some weeks back. They are complex (like Teddy's) in that they are multistage, and could also be cluster type (which can be very dangerous or even fatal as they were with Teddy, after she kept having them continuously for days).

The day of the first seizure, the (expensive) emergency vet visit at First Pet to help figure that out didn't turn up anything useful (other than that it isn't valley fever; everything else tested was inconclusive). Almost a week later, while waiting on more info from the vet, she had another one, and then another half a day later, but I couldn't get the vet staff to let me talk to the vet to get the anti-seizure meds prescribed that she had said she would prescribe if there were more seizures, and had to get the rescue (who are more used to dealing with these issues) to call and get this done, since the schmoo's seizures were happening more and more frequently, from days apart down to just hours, and that's usually serious....

(....but the vet staff was just telling me I'd need to first go visit four different places (which I'd need referrals from the vet to go to, but I had no referrals yet, so I couldn't go even if I had the time to do all this, waiting who knows how many days or weeks to get appointments, etc.), instead of just watching JellyBean die of seizures within days or even hours. To talk to the vet, they said I'd need to come all the way in just to have a non-emergency appointment and set up a totally new separate "account" for primary care vs. emergency care, and then set up an appointment *after that* to start working on getting stuff done that might lead to getting a prescription to help with her ongoing seizures.

I gave up and hung up and called a different vet (Alta Vista) that was the only other one I trusted (not many are trustable anymore, and most around here nowadays can apparently take weeks or months to even get an appointment, if they are even taking new patients), and they said they had an appointment slot available but refused to schedule us because she was having ongoing seizures, and said I had to resolve that first--what the frock do they think I'm trying to DO!!???? So I marked that vet off my list as frocking useless, and never to try to go there again (they were the ones that had saved Tiny first from her seizures, and later from Myasthenia Gravis, and a few other related problems along the way, so it was a shame they didn't actually want to be vets anymore and help dogs in need of care).

That left First Pet (the one I'd had to hang up on) as the only other trusted vet I knew of, so I contacted the rescue to see if they knew of another, explained the above, and that's when they talked to First Pet for me instead....)


So a quarter of a day later, the rescue let me know the vet had called in a prescription to a place I could get to easily and quickly, and a quarter of a day after that the pharmacy had it ready so I could go pick it up...and I got home just as she had yet another seizure. But since starting the meds, she hasn't had another one. It took her more than a couple of weeks to get used to the meds enough to not be totally stoned all day, and she is sort of back to normal.


But at our first regular checkup visit after all that, while doing a belly ultrasound to see if there was any potential cause visible there, the vet thought she saw pyometra, so all the other things she was going to check got put off and that had to be dealt with by (very expensive) emergency surgery. It was going to take hours to prep her and get the surgery team ready, and I was totally exhausted, since I'd had to work most of the day before going to the vet and was too worried and stressed out to even rest or eat or drink all that time, so I had to go home and wait; recovery would be at the vet's overnight anyway, and I couldn't stay there that whole time. The surgeon called me just before the surgery started, after they'd prepped and sedated her, and said he'd done another ultrasound to verify what he would need to do in there and did not see pyometra after all, but asked if I wanted to go ahead with a spay anyway to prevent the possibility of pyometra in the future. Since she was already prepped and sedated, I had them go ahead; it wouldn't cost much more, and less than the emergency pyo surgery, and they could also do the "tummy tack" to help prevent bloat in the future, since that's in the same area and wouldn't add much to the already high cost.

So now she's almost a couple weeks into recovery from all that, and due to go back in Monday for a recheck...and then we can continue figuring out what might be causing the seizures (even though they're presently under control, if it's not just late-onset epilepsy like with Tiny or Teddy, there are numerous other serious things that can cause them that might be treatable if detected soon enough).


I don't usually get much sleep anyway; for years now (especially after I lost Kirin and then Yogi, though it really started downhill after the fire) I've had to spend almost all the time I'm not at work laying in bed (even if using the computer for various things), trying to at least physically rest, dozing and waking repeatedly, so I have enough energy to get up each workday and earn a living. Since the above mess started, I can't run the white noise to block all the sounds out so I can stay asleep whenever I manage to get there (until whatever nightmare wakes me), because I need to hear if she is having problems so I can help. So every little sound wakes me whenever I do doze off (which is why I've always used the white noise), and there are LOTS of little noises, all the time. I also have to wake up to the various alarms to give her her meds on time, so that's another interruption to whatever sleep I might get. Eventually, after a week or three, exhaustion catches up with me and I sleep like the dead for a couple to few hours, then wake/doze/wake/doze until it's time to get up for work again.... (That part isn't new either; it's just an altered pattern from before.)

Anyway, with even less sleep than before, I have even less time, energy, and brainpower to do anything useful, so what I have has to be spent on the things I *have* to do for myself (that no one else can or will do). I have to be selfish now and just do what *I* need to do instead of what everyone else wants or needs, since I don't have someone like me to help me with all that...just myself, and there's not enough left of me to go around even for that. ;)


I would really like to have at least some version of the responsive wolfy working when I lose JellyBean, which will happen eventually no matter how the current situation turns out. I don't have any other dogs, and I'm not entirely sure it's a good idea to get another (I certainly want to...but I am so worn out I don't know that I could give one the attention it would deserve, and it's very unlikely I'll find another Kirin that just wants to be there with me and nothing else). The whole point of the wolfy project is to *make* a "dog" that would be exactly that, that won't get sick or suddenly die, and that could be "backed up" so even a total hardware failure could be worked around by restoring to a new set of hardware.




If you made it thru all that, congratulations and apologies....
 

Attachments

  • 20240529_180632.jpg (3.1 MB)
  • 20240529_180519.jpg (3.1 MB)
  • 20240529_173803.jpg (497.6 KB)
  • 20240526_125031.jpg (2 MB)
  • 20240525_202057.jpg (757.4 KB)
  • 20240524_181659.jpg (2 MB)
  • 20240603_103942.jpg (2 MB)
You are a genuine sight for sore eyes bro.... just tickled pink to see your text🥲

I think I understand a 'dog's' needs... I've owned one since I don't remember when. My current 'Lady' needs dental cleaning... sadly, I now live about 100 mi round trip from the closest vet... and not only is she not accepting new patients, I'm also being quoted $500 to $900. What to do.
 
Yes. Care for yourself first so you can help others later. The logic.

I can relate. There are many ways to do things, straightforward or mystical, and some people just can't communicate well enough. Like many things in life these skills are learned, and some people stop learning or stay with the basics. If someone has struggles, they have to face their own weaknesses; that's why psychologists try to sort themselves out through education before helping others. The same goes for every area of expertise, and I feel people nowadays are very specialized in just one area. My doctor friend doesn't know math or economics beyond the basics, not even percentages. His wife, who works in taxes, doesn't either. People need help with basic old-time things, like changing a bicycle or car tire. They need help.

I see that you are very fond of "The Wolf" project and all the things going on meanwhile really pushes you to your limit it seems.

Be well, and don't forget to ask for help.
 