Robotics Project: Snuggles, The Wolf

amberwolf

Technical discussion and idea-workout thread for Snuggles, The Wolf, a robot intended for emotional support interaction.

Discussion of the project itself is welcome; ridicule or other negative feedback about the idea itself, etc is not. ;)

The project concept is pretty complex, so reading all of the technical posts is pretty much required to understand what's actually going on and to know what is going to be required. (There are certain to be side-discussion posts here and there, but it should be quickly obvious which are which, so you can skip them. :oops:)


Primary purpose: Over the last decade or so, I've seen how many people out there (including myself for a short while after the housefire) can't have an actual animal companion for one reason or another, but could benefit emotionally from the interaction with one. So I've been very slowly learning bits and pieces of robotics to develop an open-source companion that those capable could build, and those not capable could have someone build for them.

The intent is for it to be modular, so only the capabilities desired have to be built. For instance, if all one wants is for it to respond to touch or hugs by making comforting cute sounds, just build the Proprioception Network and Sound modules, and include those in the software on the MCUs.

The following posts will start describing the technical requirements and ideas for solving some of these, and start discussing how to implement these things.

Even with all my research, I still know very little about coding, robotics, etc., so I welcome all assistance with these things.

Readers may skip the next part as it is just background for why I need (not just want) to do this type of project, and doesn't directly affect any of the technical aspects.


I started this thread now because I have begun buying some really cheap robotics parts to experiment with and learn from, and I also now have a 3D printer to make quite a bit of the fabrication much easier than it previously would have been. I will be posting my results and experiments here for those wishing to learn (or just follow along and laugh internally at my naivete in these things).



Subsections of the project description are in the following posts.



***********************************************************

A short non-technical background for the project (not necessary to read if you just want to help with it, but useful to understand why some parts of it have to exist or do what they must do, and why I am doing it at all):

I've had dogs nearly all my life, usually big ones (like the Saint Bernards I've had several of for the last decade-and-a-half), but like all living things, they die eventually. Most of the time I have more than one so I can get thru the loss of another by focusing on the remaining one(s), but there have been times when I did not have another or lost all of them at once, and had none to help me thru those times.

I have also always had homemade "wolfys": roughly wolf-shaped large plushy-kind-of-things to snuggle up with to help me sleep, or when I am very stressed out, as I am not a normal person and don't connect well with other people (I don't really understand them, and it's usually mutual). "Doctors" call it Asperger's or Autistic Spectrum; my brother calls it Neurodivergent. But it leaves me with a hole to fill, and not all dogs like the snuggles much or want to stay with me all the time (like JellyBeanThePerfectlyNormalSchmoo I have now). It's great when they do, but I wouldn't want to force them to just to help *me* out....So for those times, I keep the wolfys around--there's usually just one, but sometimes I have a second one to experiment with different ways of making one that would be better in some way without altering the one in use.

They're usually just minimally-poseable roughly-wolf-looking about-lifesized shapes, made more for how they feel than how they look. Some of you may have heard of weighted blankets, body pillows, etc., that are intended to help those that are comforted by the feeling of weight on them, or of holding something, etc. There are many variations on these; the wolfys are just another more complex version of those.

Wolf-shaped because I've been interested in wolves ever since I was a kid, and I loved seeing how they snuggle up with each other (sometimes quite aggressively), even before I ever had a dog. So I started out with some big plushy my parents probably got for me (don't remember; I was really little), and as that wore out and as I grew up, I started making my own out of old clothes and pillows and plushies from thrift stores, etc. At some point I got a cheap MP3 player with a looping function, recorded sleeping-breathing noises into it, and put that and a speaker inside. But I've wanted some sort of interaction for a while now, and so started this project. (I first had the idea of doing it in the 1990s, but robotics was too expensive to get into...nowadays it's practically dirt cheap, and even I can afford it.)


I rarely talk about them because so many people just make fun of me as if I was a child, but I often feel like one and am always stressed out by other people, and my dayjob involves constantly interacting with many people, and then there's all those I interact with on the internet. Some of these people are good, many are alright, but some are just not very nice people, and I have to deal with them anyway, with (as my mom once put it) my Lee Press-On Smile. Then I can freak out when I'm home and let the wolfy calm me down. (At work I wear a small wolfy plushie on my suspenders to help me with the most stressful moments each day...I wish I could keep the fullsize one with me (a real dog would be even better but that's simply not possible even if JellyBean (or whichever one I had at the time) were inclined to be snuggled and hugged at random times, and could stay with me all the time without "being a dog")).

(sometimes I have a dog that can do that, like Kirin did, or Lady a couple decades before, but it's rare).


Some pictures of the most recent versions, with one on the couch for scale. (I don't have pics of any older ones)

[photo attachments]
 
Proprioception Network:

While I have a good idea of what this all needs to do, and what hardware is involved in doing it, I have almost no idea how to actually implement its software, and could use a lot of help figuring that out.

It's the thing I need to do "first", and the most important part of this project, because once it's done it should make the rest far easier. (Motion and such is relatively easy; there are a bunch of ways to do that sort of thing that already have existing code and hardware...but I can't find anything that does what this PN should be able to do*****.)


Use (MEMS) accelerometer units to create a network within a device, structure, framework, etc., such as a robot (or vehicle, building, powersuit, mobility aid, etc.; for this discussion, "robot" will be used), to allow location determination and motion tracking of all parts of itself relative to all of its other parts, and to sense touch, vibration, and impact, and localize each of these to the specific part of the robot where they occur.


The latter could be done via triangulation, similarly to earthquake location determination: measure the strength and timing of a particular "sensation" at all the available sensor locations, compare them to determine which one felt it "first" and which felt it strongest, and call the location with the best correlation of those two the "sensation source".

I don't know how to do this yet.
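To make that comparison step concrete, here is a minimal sketch (plain C++, no particular hardware assumed) of one way to score it: each sensor reports when its vibration first crossed a detection threshold and how strong its peak was, and the sensor that is both earliest and strongest is called the source. The struct, the 0.01 weighting, and the whole scoring scheme are made up for illustration and would need tuning against real data.

// Sketch of the "which sensor felt it first and strongest" comparison.
// SensorEvent values are assumed to come from the per-sensor MCUs.
#include <cstdint>
#include <cstddef>

struct SensorEvent {
  uint32_t firstCrossingMs;  // time the vibration first crossed the detection threshold
  float    peakAmplitude;    // strongest reading during the event (in g)
};

// Returns the index of the sensor most likely to be nearest the touch,
// by combining "earliest arrival" and "largest amplitude" into one score.
int locateTouch(const SensorEvent events[], size_t count) {
  // Find the earliest arrival and largest peak so the rest can be normalized.
  uint32_t earliest = events[0].firstCrossingMs;
  float largest = events[0].peakAmplitude;
  for (size_t i = 1; i < count; ++i) {
    if (events[i].firstCrossingMs < earliest) earliest = events[i].firstCrossingMs;
    if (events[i].peakAmplitude  > largest)   largest  = events[i].peakAmplitude;
  }
  int best = 0;
  float bestScore = -1.0f;
  for (size_t i = 0; i < count; ++i) {
    // Later arrival and weaker peak both lower the score; the weights are arbitrary.
    float delayPenalty = (events[i].firstCrossingMs - earliest) * 0.01f;
    float strength     = (largest > 0) ? events[i].peakAmplitude / largest : 0.0f;
    float score        = strength - delayPenalty;
    if (score > bestScore) { bestScore = score; best = (int)i; }
  }
  return best;
}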


The results of this can then be mapped onto the following:

Motion / location tracking could be done by creating a volumetric space map with the center of the robot at the center of the map, and doing a form of "inertial tracking" of each sensor. This data can also be used over time to track speed, and be used to guide all of the motors used to move the robot, without any position sensors for individual motors.

I don't know how to do this yet, either.
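For the tracking part, this is roughly what the simplest form of "inertial tracking" looks like per sensor, assuming the acceleration has already been rotated into the robot's frame and had gravity removed (which is the genuinely hard part and needs the gyro plus some sensor fusion). It just integrates acceleration into velocity and velocity into position; note that this drifts badly within seconds unless corrected by other information, which is part of why the relative/triangulation data matters.

// Dead-reckoning sketch for one sensor node.
// Assumes 'accel' is already rotated into the body frame with gravity removed
// (that part needs gyro data and sensor fusion; not shown here).
struct Vec3 { float x, y, z; };

struct TrackedPoint {
  Vec3 velocity {0, 0, 0};
  Vec3 position {0, 0, 0};  // relative to the map center (the robot's torso)

  void update(const Vec3 &accel, float dtSeconds) {
    // v += a*dt, p += v*dt  (simple Euler integration; drifts without correction)
    velocity.x += accel.x * dtSeconds;
    velocity.y += accel.y * dtSeconds;
    velocity.z += accel.z * dtSeconds;
    position.x += velocity.x * dtSeconds;
    position.y += velocity.y * dtSeconds;
    position.z += velocity.z * dtSeconds;
  }
};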


Monitoring and creating this data probably won't be done by just a single central computer, but rather by a network of smaller, cheap MCUs that communicate with each other and with a central unit that does the overall decision-making. As if it had a motor-control / sense brain in each limb or other controllable part, each doing all the local detail control and just being commanded to "move paw up" or "swish tail lightly" by the main brain in response to the sensor inputs.
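A sketch of what that split might look like on the wire, just to make the idea concrete; the command names, fields, and transport (UART, ESP-NOW, I2C, whatever) are all invented placeholders, not a real protocol.

// Hypothetical command packet from the main brain to a limb sub-brain.
#include <cstdint>

enum class LimbCommand : uint8_t {
  MovePawUp,
  MovePawDown,
  SwishTailLightly,
  Relax
};

struct __attribute__((packed)) CommandPacket {
  uint8_t     limbId;     // which sub-brain should act (e.g. 0 = left front leg)
  LimbCommand command;    // what to do
  uint8_t     intensity;  // 0-255, how big/fast the motion should be
  uint16_t    durationMs; // how long to take doing it
};

// The limb MCU receives a packet and turns it into local servo motions;
// the main brain never deals with individual servo angles.
void handleCommand(const CommandPacket &pkt) {
  switch (pkt.command) {
    case LimbCommand::MovePawUp:        /* run the local paw-lift trajectory */ break;
    case LimbCommand::SwishTailLightly: /* run the local tail wiggle */         break;
    default: break;
  }
}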




Ideally eventually with the ability to learn new responses to all this sensory input, but with a manual editor to quickly and easily alter or remove unwanted responses (which you can't do with a real dog :lol: ).

I'd like to make a learn mode where a command is given to do something new, like teaching it a specific motion: it records the input from the sensors and plays that back via the motors to repeat it. Then it can "invent" very slight variations on that each time it does the motion, so it isn't so robotic.
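A rough sketch of that record-and-replay idea for a single joint, using the standard Arduino Servo library (on an ESP32 the ESP32Servo library would be used instead). The readJointAngle() stand-in just reads a potentiometer on A0 so the sketch compiles; in the real thing that value would come from the proprioception network, and the jitter amounts are arbitrary.

#include <Arduino.h>
#include <Servo.h>

const int MAX_SAMPLES = 200;
float recordedAngle[MAX_SAMPLES];
int   sampleCount = 0;
Servo joint;

// Stand-in for "read the joint's current angle from the proprioception network";
// here it just maps a potentiometer on A0 to degrees so the sketch compiles.
float readJointAngle() {
  return map(analogRead(A0), 0, 1023, 0, 180);
}

void recordMotion(int samples, unsigned long intervalMs) {
  sampleCount = 0;
  while (sampleCount < samples && sampleCount < MAX_SAMPLES) {
    recordedAngle[sampleCount++] = readJointAngle();
    delay(intervalMs);
  }
}

void playMotion(unsigned long intervalMs) {
  for (int i = 0; i < sampleCount; ++i) {
    // A couple of degrees of random "personality" so each replay differs slightly.
    float jitter = random(-20, 21) / 10.0f;            // -2.0 .. +2.0 degrees
    joint.write(constrain((int)(recordedAngle[i] + jitter), 0, 180));
    delay(intervalMs + random(0, 11));                 // small timing variation too
  }
}

void setup() {
  joint.attach(9);          // servo signal on pin 9 (example)
  recordMotion(100, 50);    // "learn": 100 samples at 20 Hz (5 seconds)
  playMotion(50);           // replay once, with slight variation
}

void loop() {}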

Same thing for sounds--make a sound, and it "learns" what that sound is and imitates it, again making its own variations. I expect this will be MUCH harder than the movement learning.



This PN is my primary initial challenge, and one I am not even sure how to approach. Summary of the above:

--make a spatial map of relative position of all parts of the robot, and track their motion and velocity

--use that data along with vibration data from each sensor mapped similarly to detect all touches, collisions

That can be used by the system control units*** to determine where everything is and how to move it, how it is actually moving vs. what it's being commanded to do, and whether the user is interfering / interacting with it, so it can change its response accordingly (e.g., it puts up a paw and the user pushes the paw downward: the robot stops trying to raise the paw and instead "follows" the motion the user is inducing until the input stops. This would also trigger other responses such as a "sad face", or a random disobedience where it swishes the paw sideways a bit and back up, trying to complete the motion it started, unless the user reinforces the original command verbally or re-grabs the paw, etc.)
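A sketch of that "stop fighting and follow the user" behavior for one joint; readMeasuredAngle() and setServoTarget() are hypothetical stubs for the proprioception network and the local servo controller, and the 15-degree threshold is a guess.

// Yield-to-the-user sketch for one joint.
#include <cmath>

float readMeasuredAngle() { return 0.0f; }              // stub: real value comes from the sensors
void  setServoTarget(float degrees) { (void)degrees; }  // stub: real version drives the servo

float commandedAngle = 30.0f;          // where the main brain asked the paw to be
const float YIELD_THRESHOLD = 15.0f;   // degrees of "the user is clearly pushing back"
bool  following = false;

void updateJoint() {
  float measured = readMeasuredAngle();
  float error = commandedAngle - measured;

  if (!following && std::fabs(error) > YIELD_THRESHOLD) {
    // The joint is far from where we're driving it: assume the user is holding
    // or pushing it, stop fighting, and follow their motion instead.
    following = true;                  // (also the hook for "sad face" / playful disobedience)
  }

  if (following) {
    setServoTarget(measured);          // track the user-induced position
    if (std::fabs(error) < 2.0f) following = false;   // user let go; resume normal control
  } else {
    setServoTarget(commandedAngle);
  }
}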


*** probably set up as multiple levels of control, with a master brain that makes decisions based on the preprocessed data from sub-brains that actually receive and filter the incoming data; the master brain then sends those decisions to further sub-brains that command the hardware to respond. Each one would be a separate MCU (ESP32, etc.), with the master brain probably being a larger Linux (or whatever) laptop computer.



***** I've looked at all sorts of touch sensors and the like, but none of them can reliably detect thru padding and/or fur (real or fake), and almost all of them have only on/off output, so they're not useful for most of the purposes I have...and I'd need hundreds of them all over the wolfy to detect *where* touches etc. are. The ones based on RF (the only type that works through any thickness of anything) also all emit the same RF frequencies, so they have to be kept a fair distance from each other, making a network of them impractical at best.

There are microwave/radar motion sensors, but their outputs are also almost always just on/off, and their resolution is so low and their spillover so high that they can't be used in this fashion.

I could probably use a theremin-like system, which uses radio antennas (one vertical for volume and one horizontal for pitch) and human body capacitance interfering with the signals to make the electronics respond (normally used to emit sound, but there is at least one project that processes the data to create MIDI output instead). This could use long antennas as grids under the fur for different areas, but the frequencies chosen for each area must be different enough not to interfere, and there are a lot of technical challenges on the analog end of things...I hate RF design, implementation, and troubleshooting, because everything interacts and problems are hard to solve. :(


So that's why I'd like to go with the accelerometers: every sensor is isolatable (input and output), how these things work is well characterized, there's plenty of codebase out there for various input-monitoring functions, and it should be "simple" to process the data from a few dozen or fewer such sensors and create a virtual "image" of the wolfy and the varying inputs.
 
Mechanical structure: a basic frame emulating a very simplified form of a wolf skeleton. (Random google-image-search results attached below.)

[wolf-skeleton image-search results]

It does not have to walk around--simply being able to "sit up", roll over, wave all its paws around in various cute interactive ways, move its tail around, move its head in a "normal" way, open and close the mouth in sync with sounds, and *maybe* pull a tongue in and out in sync with panting / breathing sounds is the extent of motion for the main body.

The face I would really like to make realistically expressive, but I have no idea how to do that without making it fairly delicate and not very snuggleable, and that last is a key feature of the entire project. I think I can do basic ear movements and maybe eyes that pivot back and forth without much worry, but I'm not yet confident of a solution for the facial structure movements.

Most joints will be rod-end ("rose" / Heim) types, for limited lateral movement in addition to their rotational movement, to provide slightly more realistic movement and to avoid damage from handling or from big dog(s) sitting or laying on the robot, which is pretty likely to happen. Ball joints for the more 3D-rotatable ones like shoulders and hips.

The neck and spine will be some form of structure like large spiral conduit tube, with a flexible foam core and control and tension wires inside. I've already experimented using actual electrical spiral conduit tube, something like this:
[photo of spiral electrical conduit tube]
and it's flexible enough, but if you repeatedly flex and twist it, it can come unwrapped. So it will need a wrapper to prevent that; I have not yet examined the PVC-covered versions, but they might work. A dense foam core will help keep it from doing this as well, and hold the control cables to the outer edges. But the cables probably have to go on the outside, as the muscles do on an actual spine, and I'd 3D-print some mounting rings to go every couple of inches up the tube to fix the cables to, so they can pull the torso in different directions by compressing the spine on one side and letting it expand on the other, etc.

Probably won't have individual ribs, just a hollow maybe somewhat flexible properly-shaped structure for the main speaker (eq'd for bass) and some other electronics to go into. Another speaker goes in the head (eq'd for treble). This system has been used in previous non-robotic versions for simple breathing sounds; a separate post will have more detail.

I would *like* to give it some form of breathing motion synchronized with the sounds it makes, but anything beyond a simple clamshell with a linear actuator between the two to push them apart and pull them together is probably more complicated than I would be able to build for the time being. The clamshell would hinge at the spine and expand at the "sternum". The linear actuator would need to be fairly fast, to emulate panting, for instance, but if only used for steady slow sleep breathing it could be pretty slow.

Motor actuation of the major movable parts would probably have to be done via cable from outside the body, to keep the snuggleable aspect maximized. If I put all the motors and actuators inside it, aside from the robot-sound it would give it that I do not want, it would not be very soft and squishy and squeezable.

I've already experimented with having a "full skeleton" inside a version of this, and it's snuggleable enough, as much as an actual dog would be, so that should be OK. But when I count up the motors and actuators and the electronics to drive them...they would easily fill up all the spaces inside, and for a fairly lean-but-fuzzy wolf look that wouldn't leave enough padding for squishiness, even though this will be about as big as a person. (Keeping them external will also allow scaling it down to whatever size someone wants, for others that don't want something that big.)

So the cables would be something like bicycle brake/shifter cables, but hopefully much smaller outer housing diameter. Or perhaps the cables all feed thru a common housing section from the external control box to the wolfy, and at either end spread out in separate housings to the actuators and to the individual moving parts of the wolfy. Otherwise it will be a pretty large bundle from the tail end of the wolfy to the ECB.

Cables for most parts can be moved via common RC servos, which are relatively cheap and easy to control and use. I only need two servos for any ball joint, and only one for any rod-end pivot.

The main torso movements will probably have to be done using stepper motors or something else inexpensive with a lot of torque.
 
Sounds:

The speakers are simple, but the sound generation is not, really. Ideally I would like something that generates wolfy sounds similarly to how speech synthesis generates human language output, so it isn't just playback of existing sounds (which I can already do with simple "MP3 player" boards, even using different switches under the "fur" for different sounds) but new ones synthesized "on the spot" based on a database of possible responses vs. possible input triggers from the proprioception network.

As previously noted, I'd like a learning mode where the user makes sounds (or plays them from a recording, etc.), and the wolfy mimics these in its own way to make them as wolf-like as possible, rather than just using a bunch of varying-quality recorded sounds (most of which I would have to gather from the internet, snip, clean up, etc., and which would not sound very realistic as they're pretty inconsistent; I have already done this for a long time).

I honestly have almost no idea how to do this, other than creating "phonemes" of some type that it can blend together...but the sources for these would be recordings, and that's still an issue. I've done plenty of sound synthesis for music, but I don't know how to use any of the synths I have to generate the kinds of sounds dogs and wolves make with any amount of realism.
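Even without solving the synthesis itself, the "same response but never quite identical" part is simple. Here is a sketch where pickClipFor() and playClipAtRate() are placeholders for whatever the real sound engine ends up being; the only point is the per-playback pitch/length variation.

// "Same response, never quite the same" sketch.
#include <Arduino.h>

enum Trigger { TOUCH_GENTLE, TOUCH_ROUGH, LOUD_NOISE, GOING_TO_SLEEP };

int  pickClipFor(Trigger t);                          // placeholder: choose one of several clips for this trigger
void playClipAtRate(int clip, uint32_t sampleRate);   // placeholder: stream it out over I2S at that rate

const uint32_t BASE_RATE = 22050;   // sample rate the clips were recorded at

void respondWithSound(Trigger t) {
  // Nudge the playback rate by up to +/-6% so pitch and length differ every time.
  float variation = 1.0f + random(-60, 61) / 1000.0f;
  playClipAtRate(pickClipFor(t), (uint32_t)(BASE_RATE * variation));
}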
 
I'd also like to have a minimal form of vision that can recognize a face (it doesn't have to differentiate between people, just know if there is one and where it is). It'd be nice if it could also recognize basic emotional expressions; I've seen chips/boards on AliExpress that claim to be able to do this, but I haven't yet done any research to find out if they are real or how well they work. There might be open-source code that can do this on common MCUs or even regular PCs. If the camera can also do object recognition and space mapping, that'd be nice too, but it's not really required.

The cameras would ideally be in the actual eyes, as they are certainly small enough nowadays (I have an ESP32-CAM whose entire camera/MCU/wifi assembly could almost fit in there :lol: ).
 
There will also be a pair of mics (probably in the ears) with associated MCUs to process incoming sounds, so it can respond appropriately to sound or the lack of it. For instance, detecting commands like "sit up", "shake", or "stop it", and all the many variations of these things as conversationally used. But also detecting when it's quiet so it "goes to sleep", and when it suddenly stops being quiet it can "wake up and get excited" that something is happening, or otherwise respond to the change. Etc.
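A sketch of that quiet/loud state machine; readSoundLevel() is a stub standing in for an RMS level computed from the I2S mics, and the thresholds and the five-minute timeout are invented numbers to be tuned.

// Sleep/wake-on-sound sketch.
#include <Arduino.h>

float readSoundLevel() { return 0.0f; }   // stub: real value would be an RMS level from the I2S mics (0.0 quiet .. 1.0 loud)

enum WolfyState { AWAKE, ASLEEP };
WolfyState state = AWAKE;

const float QUIET_LEVEL = 0.05;
const float LOUD_LEVEL  = 0.30;
const unsigned long QUIET_MS_TO_SLEEP = 5UL * 60UL * 1000UL;  // 5 minutes of quiet

unsigned long quietSince = 0;

void updateSoundBehavior() {
  float level = readSoundLevel();
  unsigned long now = millis();

  if (level > QUIET_LEVEL) quietSince = now;   // any noise resets the quiet timer

  if (state == AWAKE && (now - quietSince) > QUIET_MS_TO_SLEEP) {
    state = ASLEEP;        // start slow sleep-breathing sounds/motion here
  } else if (state == ASLEEP && level > LOUD_LEVEL) {
    state = AWAKE;         // "wake up and get excited" response goes here
  }
}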
 
This post will document a preliminary setup with one MPU-6050 accelerometer board on one of the MCUs, to detect its orientation, movement, and vibration / touch. I'll edit things into this post as I go to keep track of the useful results; the actual posts later in the thread will contain the individual experiments, but code, schematics, etc., will go here.

When I have a working single-sensor setup, then I'll use two of them to figure out how to detect *relative* position and orientation, and begin figuring out how to tell *where* a touch originates between the two sensors (a start on triangulation methods).
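As a starting point, here's the sort of first experiment sketch this post will end up documenting (ESP32 Arduino): wake the MPU-6050 over I2C, read the raw accelerometer, and derive a crude "vibration" number by comparing the acceleration magnitude against a slow-moving average. The register addresses and the +/-2 g scale factor are the chip's defaults; the filter constant and the 0.05 g "touch" threshold are guesses to be tuned.

// First MPU-6050 experiment (ESP32 Arduino): raw accel plus a crude vibration metric.
#include <Arduino.h>
#include <Wire.h>

const uint8_t MPU_ADDR = 0x68;   // AD0 pin low
float smoothedMag = 1.0;         // slow-moving average of |accel| in g

int16_t read16() {               // read one big-endian 16-bit register pair
  int hi = Wire.read();
  int lo = Wire.read();
  return (int16_t)((hi << 8) | lo);
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);   // wake it up: PWR_MGMT_1 (0x6B) = 0
  Wire.write(0x6B);
  Wire.write(0x00);
  Wire.endTransmission();
}

void loop() {
  Wire.beginTransmission(MPU_ADDR);   // point at ACCEL_XOUT_H (0x3B)
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom((int)MPU_ADDR, 6); // 6 bytes = X, Y, Z accelerometer

  float ax = read16() / 16384.0;      // +/-2 g default range: 16384 counts per g
  float ay = read16() / 16384.0;
  float az = read16() / 16384.0;
  float mag = sqrt(ax * ax + ay * ay + az * az);   // ~1.0 g when sitting still

  // Crude high-pass filter: "vibration" = distance from the slow average magnitude.
  smoothedMag = 0.98 * smoothedMag + 0.02 * mag;
  float vibration = fabs(mag - smoothedMag);

  Serial.printf("ax=%.2f ay=%.2f az=%.2f vib=%.3f %s\n", ax, ay, az, vibration,
                vibration > 0.05 ? "<-- possible touch" : "");
  delay(10);   // ~100 Hz
}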
 
Thanks. :)

Now that I have a bunch of the things I need to do stuff with (especially that printer!), and have figured out a possible path forward on this one, if I could just find enough energy and time (at the same time) to do this (and all my other projects)....



Those MPU-6050 3-axis accelerometers (specs at end) should arrive between January 4th and 8th, along with some other stuff like an ESP dev board plus breakout board and another breakout board for the other ESPs I already have, and then I can begin experimenting with them. First I just want to see how sensitive they will be to vibration and touch when attached to the "skeleton" underneath the padding and fur, since that's the most important thing I need them for. I'm sure they'll do fine as gyros and accelerometers for position and motion sensing, and I'm sure there's a bajillion codebases out there for that.

For later reference, some of those codebases:


The touch sensing and triangulation are the parts I don't know if they'll work well enough for, or if there are codebases to do this (my guess is I'll be on my own for that). So that's the first thing I need to see about--if these won't work for that, I'll have to research which ones might be sensitive enough to do that, that I can afford enough of, since I might need as many as a couple of dozen (or more) scattered around the whole wolfy body and limbs and head for sufficient sensitivity and triangulation accuracy.


The INMP441 MEMS microphones arrived today; they're very small (the size and thickness of a dime or less--the actual sensor is MUCH smaller than the PCB) and should easily fit wherever I need to put them. They should be easily concealable under the fur within the external ears, if they can't be installed inside the "internal" ears, for instance, without interfering with snuggliness or making much of a lump (unlike an electret or other mic).
See details at end of post.


Yesterday and the day before, I started learning how to access the camera on the ESP32-CAM-MB, but haven't yet gotten an image from it (all the example code I found uses webservers and wifi, and for whatever reason I can't get my router and the ESP to talk; pretty sure it's the security stuff but am still figuring out how to even test for that). I'm hoping to use this (or something like it) to do the image capture and at least pre-processing for object recognition...or at least learning how to do that sort of thing on something more powerful like a full Raspberry Pi, or sending it all out to the Linux laptop to process and then the response commands back to the onboard MCUs inside Snuggles.




MEMS microphone details
Product introduction:

The INMP441 is a high-performance, low-power, digital-output, omnidirectional MEMS microphone with a bottom port.

The complete INMP441 solution consists of a MEMS sensor, signal conditioning, an analog-to-digital converter, an anti-aliasing filter, power management, and an industry-standard 24-bit I2S interface. The I2S interface allows the INMP441 to be connected directly to digital processors such as DSPs and microcontrollers without needing an audio codec in the system. With a high signal-to-noise ratio, the INMP441 is an excellent choice for near-field applications. The INMP441 has a flat wideband frequency response that results in high-definition natural sound.


Product Features:
1. Digital I2S interface with high-precision 24-bit data
2. High signal-to-noise ratio: 61 dBA
3. High sensitivity: -26 dBFS
4. Stable frequency response from 60 Hz to 15 kHz
5. Low power consumption: 1.4 mA current draw
6. High PSR: -75 dBFS

Interface definition:
SCK: Serial data clock for I2S interface
WS: Serial data word selection for I2S interface
L/R: Left/Right channel selection.
When set to low, the microphone outputs a signal on the left channel of the I2S frame.
When set to high level, the microphone outputs signals on the right channel
SD: Serial data output of the I2S interface.
VCC: Input power, 1.8V to 3.3V.
GND: power ground
This product provides tutorials for using ESP32 modules with I2S functionality.

Connect to ESP32:
INMP441 ESP32
·SCK >> GPIO14
·SD >> GPIO32
·WS >> GPIO15
·L/R >> GND
·GND >> GND
·VDD >> VDD3.3
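A minimal ESP32 Arduino test of that wiring, using the legacy I2S driver (driver/i2s.h; the exact struct fields vary a little between Arduino-ESP32 core versions). It just prints a rough loudness number so you can see the mic responding.

// Minimal INMP441 read test on ESP32 (Arduino core, legacy I2S driver).
// Pin numbers follow the table above: SCK->14, WS->15, SD->32, L/R->GND.
#include <Arduino.h>
#include <driver/i2s.h>

#define I2S_SCK 14
#define I2S_WS  15
#define I2S_SD  32

void setup() {
  Serial.begin(115200);

  const i2s_config_t cfg = {
    .mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX),
    .sample_rate = 16000,
    .bits_per_sample = I2S_BITS_PER_SAMPLE_32BIT,   // INMP441 sends 24 bits in a 32-bit slot
    .channel_format = I2S_CHANNEL_FMT_ONLY_LEFT,    // L/R pin tied to GND = left channel
    .communication_format = I2S_COMM_FORMAT_STAND_I2S,
    .intr_alloc_flags = 0,
    .dma_buf_count = 4,
    .dma_buf_len = 256,
    .use_apll = false
  };
  const i2s_pin_config_t pins = {
    .mck_io_num = I2S_PIN_NO_CHANGE,
    .bck_io_num = I2S_SCK,
    .ws_io_num = I2S_WS,
    .data_out_num = I2S_PIN_NO_CHANGE,
    .data_in_num = I2S_SD
  };
  i2s_driver_install(I2S_NUM_0, &cfg, 0, nullptr);
  i2s_set_pin(I2S_NUM_0, &pins);
}

void loop() {
  int32_t samples[256];
  size_t bytesRead = 0;
  i2s_read(I2S_NUM_0, samples, sizeof(samples), &bytesRead, portMAX_DELAY);

  // Print a rough loudness number (mean absolute value of the 24-bit samples).
  int n = bytesRead / sizeof(int32_t);
  int64_t sum = 0;
  for (int i = 0; i < n; ++i) sum += abs(samples[i] >> 8);   // drop the unused low byte
  if (n > 0) Serial.println((long)(sum / n));
}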

What is the difference between an omnidirectional microphone and a unidirectional microphone?
First, understand that sound radiates from a speaker in all directions; how far it carries depends on many factors.
1. A directional (unidirectional) microphone has a small pickup radius, generally 30 cm at most. It is meant for a single user, placed close to the mouth (typically a gooseneck or a straight or curved boom), so it picks up mostly that one voice and the sound quality is very good, with little re-pickup. With a single-point mic the sensitivity has to be kept within a certain range: too much sensitivity causes sharp howling/feedback involving the other equipment in the chain, which is also the reason the pickup radius isn't made larger.
2. An omnidirectional microphone has a much larger pickup radius, anywhere from one to five meters; compared with tens of centimeters this is a very different class, and it picks up not just one person's voice but multiple people, five or six, seven or eight, even ten. Omnidirectional microphones are highly sensitive, and the large pickup radius also means they pick up more ambient noise, so the sound quality is not as good as a single-point mic unless good noise reduction is applied; there is definitely a difference between serving a group and serving a single person. But an omnidirectional mic has the benefit of simple wiring (you don't need one mic per person), which suits small meetings, ad-hoc meetings, remote discussions, and remote operations.
3. Single-point (directional) mics generally support local amplification; omnidirectional mics generally do not, because with such high sensitivity the local-amplification howling is very bad, and current technology has difficulty dealing with it.


MEMS 3-axis accelerometer details
  • MPU-6050 Module Accelerometer Gyro Sensor GY-521 Three-Axis Acceleration Gyroscope 6DOF Module 3-5 V IIC Communication Protocol
    Feature:
    1. Integrated 6-axis Motion Processing:
    The MPU-6050 module combines a 3-axis gyroscope and a 3-axis accelerometer in a single package, eliminating axis alignment issues and saving valuable space.
    2. Digital Motion Processor: With the built-in Digital Motion Processor (DMP) hardware acceleration engine, the MPU-6050 module can connect to other brands' accelerometers, magnetometers, or sensors via a second I2C port, offering comprehensive 9-axis fusion algorithms.
    3. Efficient Motion Processing Library: InvenSense's motion processing library efficiently handles complex motion sensing data, reducing the computational load on operating systems and providing developers with well-structured APIs.
    4. Wide Range of Sensing Capabilities: The MPU-6050 module offers precise angular velocity sensing with selectable full-scale range options of ±250, ±500, ±1000, and ±2000°/sec (dps). The user-programmable accelerometer also provides a full-scale range selection of ±2g, ±4g, ±8g, and ±16g. It supports high-speed transmission up to 400kHz via I2C or up to 20MHz via SPI.
    5. Versatile Voltage Compatibility: The MPU-6050 module operates at different voltage levels, with VDD ranging from 2.5V±5%, 3.0V±5%, or 3.3V±5%, and VDDIO for logic interface at 1.8V±5%. Its revolutionary QFN package measures 4x4x0.9mm, making it compact and suitable for various applications. Additional features include a built-in temperature sensor and an oscillator with ±1% variation in operating environments.


    Specification:
    Name: MPU-6050 module (three-axis gyroscope+three-axis acceleration)
    Using chip: MPU-6050
    Power supply: 3-5 V (internal low dropout regulator)
    Communication method: Standard IIC communication protocol
    The chip has a built-in 16-bit AD converter and 16-bit data output
    Gyroscope range: ±250 / ±500 / ±1000 / ±2000 °/s
    Acceleration range: ±2 / ±4 / ±8 / ±16 g


ESP32CAM details
The heart of the ESP32-CAM is the ESP32-S module from Ai-Thinker, built around Espressif's ESP32 System-on-Chip (SoC). Being an SoC, it contains an entire computer (the microprocessor, RAM, storage, and peripherals) on a single chip.
The ESP32-S module is equivalent to Espressif's ESP-WROOM-32 module (same form factor and general specifications).
The ESP32-S contains a Tensilica Xtensa® LX6 microprocessor with two 32-bit cores operating at a staggering 240 MHz! This is what makes the ESP32-S suitable for intensive tasks like video processing, facial recognition, and even artificial intelligence.

Memory is paramount for complex tasks, so the ESP32-S has a full 520 kilobytes of internal RAM, which resides on the same die as the rest of the chip’s components.
It may be inadequate for RAM-intensive tasks, so ESP32-CAM includes 4 MB of external PSRAM (Pseudo-Static RAM) to expand the memory capacity. This is plenty of RAM, especially for intensive audio or graphics processing.
All these features amount to nothing if you don’t have enough storage for your programs and data. The ESP32-S chip shines here as well, as it contains 4 MB of on-chip flash memory.

The OV2640 camera sensor on the ESP32-CAM is what sets it apart from other ESP32 development boards and makes it ideal for use in video projects like a video doorbell or nanny cam.
The OV2640 camera has a resolution of 2 megapixels, which translates to a maximum of 1600×1200 pixels, which is sufficient for many surveillance applications.

The ESP32-CAM comes with an on-board PCB trace antenna as well as a u.FL connector for connecting an external antenna. An Antenna Selection jumper (zero-ohm resistor) allows you to choose between the two options.

Technical Specifications​

  • Processors:
    • CPU: Xtensa dual-core 32-bit LX6 microprocessor, operating at 240 MHz and performing at up to 600 DMIPS
    • Ultra low power (ULP) co-processor
  • Memory:
    • 520 KB SRAM
    • 4MB External PSRAM
    • 4MB internal flash memory
  • Wireless connectivity:
    • Wi-Fi: 802.11 b/g/n
    • Bluetooth: v4.2 BR/EDR and BLE (shares the radio with Wi-Fi)
  • Camera:
    • 2 Megapixel OV2640 sensor
    • Array size UXGA 1600×1200
    • Output formats include YUV422, YUV420, RGB565, RGB555 and 8-bit compressed data
    • Image transfer rate of 15 to 60 fps
    • Built-in flash LED
    • Support many camera sensors
  • Supports microSD card
  • Security:
    • IEEE 802.11 standard security features all supported, including WFA, WPA/WPA2 and WAPI
    • Secure boot
    • Flash encryption
    • 1024-bit OTP, up to 768-bit for customers
    • Cryptographic hardware acceleration: AES, SHA-2, RSA, elliptic curve cryptography (ECC), random number generator (RNG)
  • Power management:
    • Internal low-dropout regulator
    • Individual power domain for RTC
    • 5μA deep sleep current
    • Wake up from GPIO interrupt, timer, ADC measurements, capacitive touch sensor interrupt

Schematic and Datasheets​

ESP32-CAM Datasheet
ESP32-CAM schematic diagram
OV2640 Camera Datasheet
Power Pins: There are two power pins: 5V and 3V3. The ESP32-CAM can be powered via the 3.3V or 5V pins. Since many users have reported problems when powering the device with 3.3V, it is advised that the ESP32-CAM always be powered via the 5V pin. The VCC pin normally outputs 3.3V from the on-board voltage regulator. It can, however, be configured to output 5V by using the zero-ohm link near the VCC pin.
GND is the ground pin.
GPIO Pins: The ESP32-S chip has 32 GPIO pins in total, but because many of them are used internally for the camera and the PSRAM, the ESP32-CAM only has 10 GPIO pins available. These pins can be assigned a variety of peripheral duties, such as UART, SPI, ADC, and Touch.
UART Pins: The ESP32-S chip actually has two UART interfaces, UART0 and UART2. However, only the RX pin (GPIO 16) of UART2 is broken out, making UART0 the only usable UART on the ESP32-CAM (GPIO 1 and GPIO 3). Also, because the ESP32-CAM lacks a USB port, these pins must be used for flashing as well as for connecting to UART devices such as GPS, fingerprint sensors, distance sensors, and so on.
MicroSD Card Pins: Used for interfacing the microSD card. If you aren't using a microSD card, you can use these pins as regular inputs and outputs.
ADC Pins: On the ESP32-CAM, only ADC2 pins are broken out. However, because ADC2 pins are used internally by the WiFi driver, they cannot be used when Wi-Fi is enabled.
Touch Pins: The ESP32-CAM has 7 capacitive touch-sensing GPIOs. When a capacitive load (such as a human finger) is in close proximity to the GPIO, the ESP32 detects the change in capacitance.
SPI Pins: The ESP32-CAM features only one SPI (VSPI) in slave and master modes.
PWM Pins: The ESP32-CAM has 10 channels (all GPIO pins) of PWM pins controlled by a PWM controller. The PWM output can be used for driving digital motors and LEDs.
For more information, refer to our comprehensive ESP32-CAM pinout reference guide. This guide also explains which ESP32-CAM GPIO pins are safe to use and which pins should be used with caution.

Setting Up the Arduino IDE​

ESP32-CAM Example 2 : Live Video Streaming Server​

Let’s try running the CameraWebServer sketch. This sketch transforms the ESP32-CAM into a full-featured webcam, complete with features like face detection and a ton of customization options. This is a very impressive example of what the ESP32-CAM can do!
You can find this example under File > Examples > ESP32 > Camera > CameraWebServer.
We’ll need to make a few changes to get this to work with your ESP32-CAM.
You must choose the appropriate camera model. Because we’re using the AI-THINKER model here, uncomment it and comment all the other models.
// ===================
// Select camera model
// ===================
//#define CAMERA_MODEL_WROVER_KIT // Has PSRAM
//#define CAMERA_MODEL_ESP_EYE // Has PSRAM
//#define CAMERA_MODEL_ESP32S3_EYE // Has PSRAM
//#define CAMERA_MODEL_M5STACK_PSRAM // Has PSRAM
//#define CAMERA_MODEL_M5STACK_V2_PSRAM // M5Camera version B Has PSRAM
//#define CAMERA_MODEL_M5STACK_WIDE // Has PSRAM
//#define CAMERA_MODEL_M5STACK_ESP32CAM // No PSRAM
//#define CAMERA_MODEL_M5STACK_UNITCAM // No PSRAM
#define CAMERA_MODEL_AI_THINKER // Has PSRAM
//#define CAMERA_MODEL_TTGO_T_JOURNAL // No PSRAM
// ** Espressif Internal Boards **
//#define CAMERA_MODEL_ESP32_CAM_BOARD
//#define CAMERA_MODEL_ESP32S2_CAM_BOARD
//#define CAMERA_MODEL_ESP32S3_CAM_LCD
Next, you need to tell the ESP32-CAM about your wireless network. Fill in the following variables with your network credentials:
const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";
The code is now ready to be uploaded to the ESP32-CAM.

Accessing the Video Streaming Server​

Once you have uploaded the sketch, open the serial monitor at baud rate 115200 and press the Reset button on the ESP32-CAM. You should see the IP address in the Serial Monitor.
Launch a browser and enter the IP address shown on the serial monitor. Ensure that the web browser is on the same network that the ESP32-CAM is connected to.
The ESP32-CAM should display a web page. To begin video streaming, press the Start Stream button.
You can play around with various camera settings in the left pane. For instance, you can alter the video’s resolution and frame rate, as well as its brightness, contrast, saturation, and the like.
Simply hit the Get Still button to take a picture. Note that images are downloaded to the computer rather than being saved to the microSD card.





 
Some of the other bits are meant more for experimentation and learning, and probably won't see final use in the actual Snuggles prototype. One is a "battery shield": the wolfy is not a portable device, and even if it were, it would need far more power than I could get out of this. At the moment all I have is a single 18650 out of a solar-powered motion-detecting light I haven't yet installed outside; the shield charges it, and correctly produces both 5 V and 3.3 V from the charged cell. I haven't run it down to see if it shuts off at LVC (HVC works during charging).




A smaller battery shield intended to have a single Li-ion cell's wires soldered to it. So far it's only been tested to power on and light its power LED (it flickers the charge LED but can't proceed, as I don't have a cell connected yet):
You can use the D1 mini without a USB cable; it's really wireless now. Put the D1 mini anywhere, using this battery shield and a lithium battery.
  • Boost: just connect a lithium battery to this shield and it supplies power for the entire D1 mini system.
  • Charging: when the battery runs down, you do not need to unplug it; just use the USB port to charge.

  • 1. Connect a lithium battery (nominal 3.3-4.2 V)
  • 2. Charging port (nominal 5 V)
  • 3. Green LED: lights when charging is complete
  • 4. Red LED: lights while charging
V1.2.0
  • New PH2 2.0 mm battery connector
  • 1 A resettable fuse
    • Charging voltage: max 10 V, recommended 5 V
    • Charging current: max 1 A
    • Lithium battery voltage: 3.3-4.2 V
    • Boost power supply: 5 V (max 1 A)


An AHT10 temperature sensor, which I'll probably put in the nose or muzzle, so that when it's not being interacted with in an overriding way, then if it detects high room temperatures it "pants" or rolls over on its back to spread out and "cool off", and if it detects low temperatures it curls up and tucks its tail over its nose.
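A sketch of that temperature-driven behavior using the Adafruit AHTX0 Arduino library (which supports the AHT10); the temperature thresholds and the behavior hooks are placeholders.

// Temperature-driven behavior sketch using an AHT10 (Adafruit_AHTX0 library).
#include <Arduino.h>
#include <Adafruit_AHTX0.h>

Adafruit_AHTX0 aht;

// Placeholders for whatever the behavior engine ends up looking like.
void startPanting()  { /* open mouth, panting sounds, faster "breathing" */ }
void curlUp()        { /* curl body, tuck tail over nose */ }
void relaxPosture()  { /* normal idle behavior */ }

void setup() {
  Serial.begin(115200);
  if (!aht.begin()) {                 // default I2C pins, sensor address 0x38
    Serial.println("AHT10 not found");
    while (true) delay(1000);
  }
}

void loop() {
  sensors_event_t humidity, temp;
  aht.getEvent(&humidity, &temp);
  float c = temp.temperature;

  if (c > 29.0)      startPanting();  // room is hot (threshold is a guess)
  else if (c < 18.0) curlUp();        // room is cold
  else               relaxPosture();

  delay(10000);                       // temperature changes slowly; check every 10 s
}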

AHT10 Digital Temperature and Humidity Sensor Measurement Module I2C Communication 1.8V-6.0V High Precision Replace DHT11SHT20​

The AHT10, a new generation of temperature and humidity sensors, sets a new standard in size and intelligence: it is embedded in a dual-row flat leadless SMD package for reflow soldering, with a 4 x 5 mm bottom and a height of 1.6 mm. The sensor outputs a calibrated digital signal in standard I2C format.
1. Module size: 16*11 mm
2. Interface type: I2C
3. Working voltage: 1.8 - 6.0 V
4. Interface size: 4*2.54mm pitch
5. Humidity accuracy: typical ± 2%
6. Humidity resolution: 0.024%
7. Temperature accuracy: typical ± 0.3 ° C
8. Temperature resolution: Typical 0.01 °C
9. Working temperature: -40°C--85°C

A couple of stepper drivers: the A4988 and the DRV8825.

This product is a breakout board for Allegro's A4988 DMOS Microstepping Driver with Translator and Overcurrent Protection; please read the A4988 datasheet carefully before using this product.
This stepper motor driver lets you operate bipolar stepper motors in full-, half-, quarter-, eighth-, and sixteenth-step modes, with an output drive capacity of up to 35 V and 2 A.
The translator is the key to the easy implementation of the A4988.
Simply inputting one pulse on the STEP input drives the motor one microstep.
There are no phase sequence tables, high frequency control lines, or complex interfaces to program.
The A4988 interface is an ideal fit for applications where a complex microprocessor is unavailable or is overburdened.

Feature:
Simple step and direction control interface
Five different step resolutions: full-step, half-step, quarter-step, eighth-step, and sixteenth-step
Adjustable current control lets you set the maximum current output with a potentiometer,
which lets you use voltages above your stepper motor's rated voltage to achieve higher step rates
Intelligent chopping control that automatically selects the correct current decay mode (fast decay or slow decay)
Over-temperature thermal shutdown, under-voltage lockout, and crossover-current protection
Short-to-ground and shorted-load protection

Warning:
Connecting or disconnecting a stepper motor while the driver is powered can destroy the driver.
(More generally, rewiring anything while it is powered is asking for trouble.)
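For reference, the step/direction interface really is as simple as the description above suggests; here's a bare-bones Arduino example that spins one motor back and forth through an A4988 (or a DRV8825, same interface). The pin numbers are arbitrary, and the driver's current-limit pot still has to be set first.

// Minimal step/dir test for an A4988 or DRV8825 carrier (Arduino).
#include <Arduino.h>

const int STEP_PIN = 3;   // arbitrary pin choices
const int DIR_PIN  = 4;
const int EN_PIN   = 5;   // ENABLE is active-low on these carriers

void stepMany(int steps, bool forward, unsigned int stepDelayUs) {
  digitalWrite(DIR_PIN, forward ? HIGH : LOW);
  for (int i = 0; i < steps; ++i) {
    digitalWrite(STEP_PIN, HIGH);       // one rising edge = one (micro)step
    delayMicroseconds(stepDelayUs);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(stepDelayUs);
  }
}

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  pinMode(EN_PIN, OUTPUT);
  digitalWrite(EN_PIN, LOW);            // enable the driver
}

void loop() {
  stepMany(200, true, 800);             // one revolution at full step (1.8-degree motor)
  delay(500);
  stepMany(200, false, 800);            // and back
  delay(500);
}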
The DRV8825 stepper motor driver carrier is a breakout board for TI's DRV8825 microstepping bipolar stepper motor driver. The module has a pinout and interface that are nearly identical to those of our A4988 stepper motor driver board, so it can be used as a higher-performance drop-in replacement for those boards in many applications.

The DRV8825 features adjustable current limiting, overcurrent and overtemperature protection, and six microstep resolutions (down to 1/32-step). It operates from 8.2 - 45 V and can deliver up to approximately 1.5 A per phase without a heat sink or forced air flow (rated for up to 2.2 A per coil with sufficient additional cooling).


1)Simple step and direction control interface

2)Six different step resolutions: full-step, half-step, 1/4-step, 1/8-step, 1/16-step, and 1/32-step

3)Adjustable current control lets you set the maximum current output with a potentiometer, which lets you use voltages above your stepper motor's rated voltage to achieve higher step rates

4)45 V maximum supply voltage

5)Built-in regulator (no external logic voltage supply needed)

6)Can interface directly with 3.3 V and 5 V systems

7)Over-temperature thermal shutdown, over-current shutdown, and under-voltage lockout

8)Short-to-ground and shorted-load protection

9)4-layer, 2 oz copper PCB for improved heat dissipation

10)Exposed solderable ground pad below the driver IC on the bottom of the PCB

11)Module size, pinout, and interface match those of our A4988 stepper motor driver carriers in most respects (see the bottom of this page for more information)



We also carry a DRV8824 stepper motor driver carrier that can serve as a direct substitute for the DRV8825 carrier when using lower-current stepper motors. The DRV8824 can only deliver up to 0.75 A per coil without a heat sink (1.2 A max with proper cooling), but it has larger current-sense resistors that allow for better microstepping performance than the DRV8825 carrier at low currents. The only way to tell our DRV8824 carrier apart from the DRV8825 carrier is by the markings on the driver IC; if you have a mix of the two, you might consider marking them (there is a blank square on the bottom silkscreen you can use for this). For lower-voltage applications, consider our pin-compatible DRV8834 carrier, which works with motor supply voltages as low as 2.5 V.



An I2C expansion board to give more independent channels for an MCU with only one I2C bus: the TCA9548A (sold as the CJMCU-9548 breakout).

CJMCU-9548 TCA9548A 1-to-8 I2C 8-way multi-channel expansion board / IIC module development board

You just found the perfect I2C sensor and want to wire up two or three or more of them, when you realize: "Uh oh, this chip has a fixed I2C address, and from what I know about I2C, you cannot have two devices with the same address on the same SDA / SCL pins!" Are you out of luck? You would be, if you did not have this ultra-cool CJMCU-9548 1-to-8 I2C multiplexer!
Finally, a way to get up to 8 same-address I2C devices hooked up to one microcontroller: this multiplexer acts as a gatekeeper, shuttling commands to the selected set of I2C pins on your command.
Using it is fairly straightforward: the multiplexer itself is on I2C address 0x70 (adjustable from 0x70 to 0x77), and you simply write a single byte selecting the desired multiplexed output to that address, and bam - any future I2C packets will get sent to that output. In theory, you could have 8 of these multiplexers, one on each of the 0x70-0x77 addresses, in order to control 64 of the same-I2C-addressed part.
The chip is on a breakout board with the capacitor, pullups, and pulldowns needed to make usage a snap, so you can use it on a breadboard; some header is required, and once soldered in it can be used with any logic level.

Product Features
  • The adapter lets multiple I2C devices with identical addresses be connected to the same I2C port on the main control board and still communicate normally.
  • There are 8 I2C interfaces on one expander: eight bidirectional switches that can be controlled over the I2C bus. This means that through one expander, 8 devices with the same address can be connected to the same I2C port and used normally. The default address of the expander is 0x70; the address can be changed and set anywhere within 0x70-0x77 as needed. In theory, 8 expanders can be cascaded across the 0x70-0x77 addresses, each connecting 8 devices with the same address, so a single I2C port can address 64 same-address devices and use them normally. Very powerful!
  • Compatible with the I2C bus and System Management Bus (SMBus); active-low reset input; supports hot insertion; low standby current; no glitches during power-up; supports level shifting between 1.8 V, 2.5 V, 3.3 V, and 5 V buses.
  • Depending on the contents of the programmable control register, any single SCn/SDn channel or combination of channels can be selected. If a timeout or other improper operation occurs, the system master can reset the TCA9548A by asserting the /RESET input low. Similarly, a power-on reset deselects all channels and initializes the I2C/SMBus state machine; reset and initialization can also be achieved by asserting /RESET without powering down the part. The switch has a pass-gate design, so the VCC pin can be used to limit the maximum high voltage passed through the TCA9548A. Limiting the maximum high voltage allows different bus voltages to be used on each channel pair, so that 1.8 V, 2.5 V, or 3.3 V parts can communicate with 5 V parts without any additional protection; for each channel, external pull-up resistors pull the bus up to the desired voltage level. All I/O pins are 5 V tolerant.
    Product Parameters
    • Working voltage: 1.65V~5.5V
    • IIC clock frequency: 0 ~ 400KHz
    • Working environment temperature: -40 ° C ~ 85 ° C
    • Small board size: 31.5mm*21.4mm with two M3 mounting holes
    Instructions For Use
    The /RESET, SDA, SCL, A0, A1, and A2 pins on the small board are connected to on-board 10 kΩ pull-up resistors, so no external pull-ups are needed.
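The channel-select byte mentioned above is simple enough to do with plain Wire calls; here's a sketch (multiplexer at the default 0x70) that could, for example, probe for an MPU-6050 at 0x68 on each of the eight downstream buses.

// Select one of the TCA9548A's eight downstream I2C channels (Arduino).
#include <Arduino.h>
#include <Wire.h>

const uint8_t MUX_ADDR = 0x70;   // default; adjustable 0x70-0x77 via A0-A2

void selectMuxChannel(uint8_t channel) {       // channel 0..7
  Wire.beginTransmission(MUX_ADDR);
  Wire.write(1 << channel);                    // one bit per downstream channel
  Wire.endTransmission();
}

void setup() {
  Wire.begin();
  // Example: read the WHO_AM_I register of an MPU-6050 on each channel in turn.
  for (uint8_t ch = 0; ch < 8; ++ch) {
    selectMuxChannel(ch);
    Wire.beginTransmission(0x68);
    Wire.write(0x75);                          // WHO_AM_I register
    Wire.endTransmission(false);
    Wire.requestFrom(0x68, 1);
    // ...check the response here to see which channels have a sensor attached
  }
}

void loop() {}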


A servo expansion PWM board PCA9685PW

Want to make a hexapod walker? Maybe you're making a piece of art with tons of moving parts, or you need to drive a ton of LEDs with precise PWM output. Your microcontroller has a limited number of PWM outputs, and you find yourself running out! Not with the Adafruit 16-Channel 12-bit PWM/Servo Driver - I2C interface. With this pwm and servo driver breakout, you can control 16 free-running PWM outputs with just two pins! Need to run more than 16 PWM outputs? No problem. Chain together up to 62 of these beauties for up to an outstanding 992 PWM outputs.

Features:​

Dimensions (no headers or terminal block) 2.5" x 1" x 0.1" (62.5mm x 25.4mm x 3mm)
Weight (no headers or terminal block): 5.5grams
Weight (with 3x4 headers & terminal block): 9grams
This board/chip uses I2C 7-bit address between 0x60-0x80, selectable with jumpers
Terminal block for power input (or you can use the 0.1" breakouts on the side)
Reverse polarity protection on the terminal block input
Green power-good LED
3-pin connectors in groups of 4 so you can plug in 16 servos at once (servo plugs are slightly wider than 0.1", so you can only stack 4 next to each other on 0.1" header)
"Chain-able" design
A spot to place a big capacitor on the V+ line (in case you need it)
220 ohm series resistors on all the output lines to protect them, and to make driving LEDs trivial
Solder jumpers for the 6 address select pins
I2C-controlled PWM driver with a built-in clock. Unlike the TLC5940 family, you do not need to continuously send it a signal, tying up your microcontroller; it's completely free-running!
It is 5V compliant, which means you can control it from a 3.3V microcontroller and still safely drive up to 6V outputs (this is good for when you want to control white or blue LEDs with 3.4+ forward voltages)
6 address select pins so you can wire up to 62 of these on a single i2c bus, a total of 992 outputs - that's a lot of servos or LEDs
Adjustable frequency PWM up to about 1.6 KHz
12-bit resolution for each output - for servos, that means about 4us resolution at 60Hz update rate
Configurable push-pull or open-drain output
Output enable pin to quickly disable all the outputs

(1) Connecting the driver board to an Arduino:
The PWM driver board uses I2C, so only four wires need to be connected to the Arduino:
"Classic" Arduino pin layout:
+ 5v -> VCC
GND -> GND
Analog 4 -> SDA
Analog 5 -> SCL

Old Mega pin way:
+ 5v -> VCC
GND -> GND
Digital 20 -> SDA
Digital 21 -> SCL
R3 and later Arduino pin method (Uno, Mega &
Leonardo):
(These boards have dedicated SDA and SCL pins)
+ 5v -> VCC
GND -> GND
SDA -> SDA
SCL -> SCL
The VCC pin is only for the chip's power supply; if you want to connect servos or LEDs, power them from the V+ pin, which supports 3.3-6 V (5 V is safe for the chip). It is recommended to connect the external supply via the power terminal block.
(2) Power supply:
Most servos are designed for 5-6 V, and running several at the same time requires a fairly high-power supply. Powering servos directly from the Arduino 5 V pin can cause unpredictable problems, so a suitable external power supply for the driver board is recommended.
(3) Connecting the servos:
Most servos use standard 3-wire female plugs that push straight onto the corresponding pins on the driver board. (The ground wire is generally black or brown; the signal wire is generally yellow or white.)
(4) Assigning addresses to driver boards:
Each driver board in a cascade needs a unique I2C address. The initial I2C address of each board is 0x40; you can modify it with the address jumpers in the upper right corner. Bridging a jumper with solder sets that bit to binary "1".
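For completeness, here's the usual minimal Arduino example for this board using the Adafruit PWM Servo Driver library, sweeping one servo on channel 0; the pulse-count limits are the library examples' typical starting values and need adjusting per servo.

// Sweep one servo on channel 0 of a PCA9685 (Adafruit_PWMServoDriver library).
#include <Arduino.h>
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>

Adafruit_PWMServoDriver pwm;          // default I2C address 0x40

// Typical starting points from the library examples; tune per servo.
const uint16_t SERVO_MIN = 150;       // out of 4096 counts at 50 Hz
const uint16_t SERVO_MAX = 600;

void setup() {
  Wire.begin();
  pwm.begin();
  pwm.setPWMFreq(50);                 // standard analog-servo update rate
}

void loop() {
  for (uint16_t p = SERVO_MIN; p <= SERVO_MAX; p += 5) {  // sweep one way
    pwm.setPWM(0, 0, p);
    delay(10);
  }
  for (uint16_t p = SERVO_MAX; p >= SERVO_MIN; p -= 5) {  // and back
    pwm.setPWM(0, 0, p);
    delay(10);
  }
}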

 

More of the stuff arrived today, though not the accelerometers (the part I really want to begin messing with).

The servo tester did, though, so I spent a couple of minutes while waiting for dinner to warm up and used it to verify that the servos do work. (They're quieter than I expected, though they'll be louder when pulling or pushing a load.)
The stepper motor kit also arrived, though I don't think I'll be setting it up to test until the weekend. It is intended for the torso and head/neck movements that require much more power and speed than the smaller servos could provide. It has four steppers like those in the Creality 3D printers, a driver mounting/power board with four individual stepper-driver PCBs that fit into it, and an Arduino Uno board, along with all the wires to connect these together. I don't know that any of these things will actually do the final job, but they'll let me experiment cheaply, and then I can replace them with the correct equipment once I figure out what that is, if necessary.

The TZT PCF8575 and TCA9548A I2C "breakout" / expander boards and the TXS0108E logic level converters came too, along with a couple of extra breadboards and some premade wires to use with them, and another ESP32 with a dev board, plus another separate dev board with screw-down terminals. Dunno when I'll get to testing these; I'm still working on looking up codebases / libraries for the various types of things I would like to do, and articles on how to use them for the kinds of things I want to do.
aliexpress.us/item/3256806005150185.html
 
I expect to have to deal with some form of neural network, so I've been watching this
but I've restarted from the beginning several times now, after getting only about half an hour in, and I still have very little idea what they are talking about. It's a bunch of math, and whenever formulas and things start showing up, even with explanations as "good" as he is giving, my brain just shorts out and turns into a small mushroom cloud. :(
 
For grins I put forward the general question of the PN to ChatGPT, and got about as useful a response as I expected, which is all things I already know. :lol:

I'll have to write up much more specific questions to see if it can give me useful help (though I suppose it's useful as-is in that it forces me to be more specific, and write down details of hardware, interconnections, structure, etc).



How do i read dozens of 3-axis accelerometers to determine limb and joint positions of a robot, as well as their velocity? i also need to read vibration data from them to determine where a user is touching the robot, and how they are touching it (tap, stroke, grab, etc), and how hard they are touching it.​




ChatGPT
Reading data from multiple 3-axis accelerometers to determine limb and joint positions, as well as velocity, involves several steps. Additionally, incorporating vibration data for touch sensing adds another layer of complexity. Below is a general guide to help you get started:

Hardware Setup:​

  1. Choose Accelerometers:
    • Select 3-axis accelerometers suitable for your application. Consider factors such as sensitivity, range, and communication interface (e.g., I2C, SPI).
  2. Mounting:
    • Attach accelerometers to the robot limbs and joints. Ensure proper alignment and secure mounting to capture accurate data.
  3. Microcontroller/Processor:
    • Choose a microcontroller or processor capable of handling the data from multiple accelerometers. Popular choices include Arduino, Raspberry Pi, or specialized microcontrollers for sensor fusion applications.
  4. Power Supply:
    • Provide a stable power supply for both the accelerometers and the microcontroller.

Sensor Data Acquisition:​

  1. Read Accelerometer Data:
    • Interface each accelerometer with your microcontroller using the appropriate communication protocol (I2C, SPI). Read raw acceleration data from each axis.
  2. Sensor Fusion:
    • Implement sensor fusion algorithms (e.g., Kalman filter, complementary filter) to combine data from multiple accelerometers and obtain accurate limb and joint positions.
  3. Velocity Calculation:
    • Derive velocity from the acceleration data using numerical integration methods (e.g., trapezoidal integration).

Touch Sensing:​

  1. Vibration Sensors:
    • Choose vibration sensors or accelerometers with a high sensitivity to detect touch events.
  2. Signal Processing:
    • Implement signal processing algorithms to analyze vibration data and determine touch events (e.g., taps, strokes, grabs).
  3. Force Sensing:
    • If your accelerometers support force sensing, extract force information from the accelerometer data to determine how hard the user is touching the robot.

Software Implementation:​

  1. Programming Environment:
    • Write code using a programming language compatible with your microcontroller (e.g., C++, Python).
  2. Data Filtering:
    • Apply filters to remove noise from accelerometer data. Low-pass filters can be useful for this purpose.
  3. Calibration:
    • Calibrate the accelerometers to account for any biases or offsets in the sensor readings.
  4. Communication:
    • Establish a communication protocol to send sensor data to a computer or another device for further processing if needed.
  5. User Interface:
    • Develop a user interface to visualize limb and joint positions, velocity, and touch events.

Testing and Calibration:​

  1. Test Rig:
    • Set up a testing rig to validate the accuracy of the limb and joint positions as well as touch sensing.
  2. Calibration Procedures:
    • Implement calibration procedures to ensure accurate and reliable sensor data.

Documentation and Optimization:​

  1. Documentation:
    • Document your hardware setup, software algorithms, and calibration procedures for future reference.
  2. Optimization:
    • Optimize your code for efficiency, especially if real-time processing is crucial for your application.

Safety Considerations:​

  1. Safety Features:
    • Implement safety features to prevent damage to the robot or injury to users in case of unexpected events or sensor errors.
  2. User Guidelines:
    • Provide guidelines for users interacting with the robot to ensure safe and meaningful interactions.
Remember, this is a high-level overview, and the actual implementation details will depend on the specific hardware and software components you choose. Additionally, consider consulting with experts in robotics, sensor fusion, and signal processing for more tailored advice based on your project's requirements.
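For what the velocity-by-integration step above actually looks like in code, here is a minimal sketch of trapezoidal integration for one axis. It is only illustrative: integrating accelerometer data by itself accumulates error within seconds, so in practice it has to be combined with other information (gyro data, joint limits, periodic zero-velocity resets) to stay meaningful. The function name and units are my own placeholders, not anything from the hardware libraries.

    // Naive velocity estimate for one axis by trapezoidal integration.
    // axMs2 is acceleration in m/s^2 with gravity already removed.
    float vx = 0.0f;               // running velocity estimate, m/s
    float prevAx = 0.0f;           // previous acceleration sample
    unsigned long prevMicros = 0;  // time of the previous sample

    void updateVelocity(float axMs2) {
      unsigned long now = micros();
      if (prevMicros != 0) {
        float dt = (now - prevMicros) * 1e-6f;   // seconds since the last sample
        vx += 0.5f * (prevAx + axMs2) * dt;      // trapezoid rule: average of the two samples times dt
      }
      prevAx = axMs2;
      prevMicros = now;
    }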
 
I gave it a much more specific question whose response is slightly more useful, but of course it has to be checked and would only be a guide:

How do I use the MPU-6050 accelerometer to detect touch via vibration?


ChatGPT
The MPU-6050 is a 6-axis accelerometer and gyroscope combination sensor. While it's primarily designed for motion sensing, you can potentially use its accelerometer data to detect touch or vibration events. Here's a basic guide on how to get started:

Hardware Setup:​

  1. Connect the MPU-6050:
    • Connect the MPU-6050 to your microcontroller (e.g., Arduino) using I2C communication. Connect the SDA and SCL pins to the corresponding pins on your microcontroller.
  2. Power Supply:
    • Provide power to the MPU-6050, typically 3.3V or 5V, depending on the sensor's specifications.
  3. Ground Connection:
    • Connect the ground (GND) pin of the MPU-6050 to the ground of your microcontroller.

Software Setup:​

  1. Install Libraries:
    • If you're using Arduino, install the necessary libraries for the MPU-6050. You can use libraries like "Wire.h" for I2C communication and "MPU6050.h" for the MPU-6050 sensor.
  2. Read Accelerometer Data:
    • Read the raw accelerometer data from the MPU-6050. The accelerometer data includes values for the x, y, and z axes.
    • C++ (Arduino):
      #include <Wire.h>
      #include <MPU6050.h>

      MPU6050 mpu;

      void setup() {
        Wire.begin();
        mpu.initialize();
      }

      void loop() {
        int16_t ax, ay, az;
        mpu.getAcceleration(&ax, &ay, &az);

        // Process accelerometer data here
      }
  3. Detect Vibration:
    • Analyze the accelerometer data to detect vibration. You can set a threshold for each axis and trigger an event when the acceleration exceeds this threshold.
    • C++ (Arduino):
      const int vibrationThreshold = 1000;

      void loop() {
        int16_t ax, ay, az;
        mpu.getAcceleration(&ax, &ay, &az);

        if (abs(ax) > vibrationThreshold || abs(ay) > vibrationThreshold || abs(az) > vibrationThreshold) {
          // Vibration detected, handle touch event
        }
      }
  4. Fine-tuning:
    • Experiment with different threshold values and filtering techniques to fine-tune the touch detection based on your specific requirements.

Testing:​

  1. Serial Monitor:
    • Print the accelerometer data to the serial monitor for debugging and fine-tuning.
    • C++ (Arduino):
      void loop() {
        int16_t ax, ay, az;
        mpu.getAcceleration(&ax, &ay, &az);

        Serial.print("Acceleration - X: ");
        Serial.print(ax);
        Serial.print(", Y: ");
        Serial.print(ay);
        Serial.print(", Z: ");
        Serial.println(az);

        if (abs(ax) > vibrationThreshold || abs(ay) > vibrationThreshold || abs(az) > vibrationThreshold) {
          // Vibration detected, handle touch event
        }
      }
  2. Physical Testing:
    • Gently tap or vibrate the MPU-6050 to simulate touch events and observe how the system responds.

Notes:​

  • This method is a basic approach and might not be suitable for all touch sensing applications. For more advanced touch sensing, you might consider using dedicated touch sensors or capacitive touch sensing methods.
  • Keep in mind that the MPU-6050 is primarily designed for motion sensing, and using it for touch sensing may have limitations in terms of sensitivity and accuracy.
  • Depending on your specific touch sensing requirements, you may need to implement additional signal processing or filtering techniques to improve the reliability of touch detection.
Remember to consult the MPU-6050 datasheet and reference materials for detailed information about its features and specifications.
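One thing worth flagging when checking the snippets above: at the MPU-6050's default +/-2 g range, the raw readings include gravity at roughly 16,384 LSB per g, so a fixed threshold of 1000 on the raw axes will fire constantly even with the sensor sitting still. One way around that (a minimal sketch, assuming the same i2cdevlib-style MPU6050 library as above) is to track a slow-moving baseline of the total acceleration magnitude and only react to fast deviations from it; the constants here are placeholders to tune against real taps and strokes:

    #include <Wire.h>
    #include <MPU6050.h>

    MPU6050 mpu;
    float baseline = 16384.0f;           // slow estimate of the resting magnitude (~1 g at +/-2 g range)
    const float alpha = 0.02f;           // how quickly the baseline follows slow changes (tilt, posture)
    const float tapThreshold = 3000.0f;  // raw LSB above baseline that counts as a "jolt" (placeholder)

    void setup() {
      Serial.begin(115200);
      Wire.begin();
      mpu.initialize();
    }

    void loop() {
      int16_t ax, ay, az;
      mpu.getAcceleration(&ax, &ay, &az);

      // Orientation-independent total acceleration magnitude.
      float mag = sqrtf((float)ax * ax + (float)ay * ay + (float)az * az);

      baseline += alpha * (mag - baseline);   // crude low-pass: "what gravity looks like right now"
      float jolt = fabsf(mag - baseline);     // the fast-changing part: vibration, taps, impacts

      if (jolt > tapThreshold) {
        Serial.println("possible tap");
      }
      delay(2);   // poll fast enough to catch short taps
    }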
 
🤗 Wow. I love this place. The Wright brothers would be proud. Anything more detailed from me is going to have to wait, as I'm not sure yet what proprioceptive actually means... But it's a beautiful project, amberwolf.
 
Proprioception at its core just means "a sense of where one's body is and how and where it's moving", etc., which is how I am using it in the Proprioception Network post above.

What I need it to do for this project's purposes is defined in that PN post, but how to do it, software-wise, I don't yet know. I know what the hardware itself has to do, what data it has to gather, etc., just not exactly how to process that data in software to keep track of where all the robot's body parts are relative to each other and what speed they're moving at, relative to each other, and thus where any "seismically" detected touch is located on the surface of the robot (or how to filter the "seismic" data to actually detect that a touch or impact (like a paw into someone's face :lol: ) has happened, and what kind of touch it is).
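Purely as a sketch of how the "what kind of touch" question might be attacked once a vibration level ("jolt") is available (for example from the baseline-subtraction idea a couple of posts up), one crude approach is to segment events with a start/stop threshold and then classify by duration and peak level. Every threshold and category below is an invented placeholder; a sustained grab in particular may produce very little vibration, so real classification will likely need more signals (and possibly the learning approaches discussed later in the thread):

    enum TouchKind { NONE, TAP, STROKE, GRAB };

    // Classify a touch event from a stream of "jolt" values (vibration level, raw LSB).
    // Call once per sample; returns NONE until an event finishes.
    TouchKind classifyTouch(float jolt) {
      static bool active = false;
      static unsigned long eventStart = 0;
      static float peak = 0;
      const float startLevel = 1500.0f;  // placeholder: level that begins an event
      const float stopLevel  = 800.0f;   // placeholder: level that ends it (hysteresis)
      unsigned long now = millis();

      if (!active) {
        if (jolt > startLevel) { active = true; eventStart = now; peak = jolt; }
        return NONE;
      }
      if (jolt > peak) peak = jolt;
      if (jolt > stopLevel) return NONE;       // event still in progress

      active = false;                          // event just ended; decide what it was
      unsigned long duration = now - eventStart;
      if (duration < 150 && peak > 4000) return TAP;     // short and sharp
      if (duration >= 150 && peak < 4000) return STROKE; // longer and gentler
      return GRAB;                                       // long and strong, or anything else
    }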





Those MPU6050 accelerometers haven't arrived yet because apparently they've taken a detour...they started in Whittier, California on the night of the 26th, then reached the Phoenix regional USPS processing center the morning of the next day, whereupon they went to the local USPS post office near me in the wee hours of today, so I'd expect them to be out for delivery today or tomorrow. Instead, they apparently went from there to Flagstaff in only an hour and fifteen minutes (which I think is physically impossible), and were then processed again there a few hours later this morning.

No updates since then, but I suspect I'll never see them and whatever else is in that package. :(
 
I get it.

You've just given me an idea. I'll do some textile work very soon...

AI helps you learn faster. The way for us humans to save time so we can compete against the aliens.

I just got some Teksta dogs and they have (I guess) all these sensors, and a bone as a "remote" to get the dog to do some moves. It has 3 different settings: Demo, Auto and Manual. Auto should use the sensors in the eyes; when you move your hand in certain ways it can see the pattern and do 3-4 different actions. By "see" I guess I mean the sensor measures the timing and interactions given by the hand motion and so gets a unique signal. One action is to jump and do a 360°. I haven't seen it or tried it just yet. I guess I need to train my robo dogs first, or at least learn how to get them to follow my orders. Joking*.

Teksta robot dog

10-year-old technology, but it works. Sounds like a puppy. Pretty good tech.
 
Yeah, over the years I've looked at literally every robot dog or animal project that's ever been done and had information put on the internet about it. Absolutely none of them does even remotely what I need, and none of them behaves like a dog. :(

Some of them behave kind of like how people that don't have dogs expect dogs to behave....


Some of them are cute or even useful in some way, but not for any of my purposes. ;)



The good news is that the accelerometers did actually arrive (and the tracking info has changed from what it showed yesterday; now it skips the arrival in Flagstaff that was too fast to be physically possible).



The bad news is it sounds like I'm in the middle of a warzone with all the mortar-round and other large fireworks, actual guns being fired in the air (which despite all the gunshot sensors in the area, the police never do anything about), etc., so even with everything closed up and foam blocks in the windows and a box fan on high and headphones on, I'm too stressed to concentrate on working with any of these parts; I'd screw something up and just blow things up trying to wire them.

Thankfully JellyBeanThePerfectlyNormalSchmoo doesn't yet care about it (but it will get bad enough tomorrow night to affect even her); if Yogi and Kirin were still around I wouldn't be able to type as they'd both be on top of me right now trying to hide under me (and I wouldn't blame them a bit).



So actually working with them will have to wait till it's quieter, which might be a couple of weeks if it's like some years have been, or I could get lucky and they'll stop by morning so I can sleep a bit, and then if they don't start up again before then I'll see if I can breadboard up a test system and poke at it with some test code.

I didn't get any displays to go with any of these MCUs, as I won't need one for the actual project...but I should've gotten one for development and testing purposes. For now, I think I still have the old OLED I changed out of the Grin Satiator; it's small, but IIRC it's either an I2C or SPI interface, so I could probably use it, though I'd have to solder wires to it as the cable is faulty (which is why I had to replace it).
Satiator OLED Repair Kit
I know I have some old printer tester thing from my ancient PC-repair days that has a fairly large (if low-res) LCD in it that may be usable, and I have some other devices I don't use that may have screens I could use, if any of them are easy to adapt and use.

If not, I'll just order a display I can use for development and testing that can show me graphically what is happening, etc. until I get to the point of figuring out how to display the data on a PC screen remotely.
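In the meantime, one zero-hardware way to get a live graphical view on the PC is to have the MCU stream readings as comma-separated values over the USB serial link; the Arduino IDE's built-in Serial Plotter will chart them in real time, and the same stream can later be logged or fed to something fancier. A minimal sketch of the loop side, assuming the MPU6050 object and setup from the snippets earlier in the thread:

    void loop() {
      int16_t ax, ay, az;
      mpu.getAcceleration(&ax, &ay, &az);  // assumes the mpu object/init from the earlier examples

      Serial.print(ax); Serial.print(',');
      Serial.print(ay); Serial.print(',');
      Serial.println(az);                  // one comma-separated line per sample

      delay(20);                           // ~50 samples/sec is plenty for a live plot
    }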
 
Some thoughts on training an AI, and "level saving" (kind of like saved games, but not):

I am still learning how these things work, so my poor understanding may mean this is not yet possible.

If an AI "learns" by weighting "neurons" responses towards desired patterns of output, then "engrams" should be saveable and imprintable by saving or adding those weights for the neurons responsible for specific input/output chains.

Rather than just writing over existing weights to replace a whole set of behaviors, a specific behavior should be imprintable by adding the weights to the appropriate neurons. A specific behavior should be extractable (removable) by subtracting the weights from those.

It wouldn't be perfect, since weights might have been added from other learnings, and this may affect those, and vice-versa. So you could not use this to "construct" a specific exact "personality" or group of behaviors, but it could be used, once a behavior has been trained, to read that out and save it for later imprinting, etc. Or if weights were already known for how to create some specific behavior, they could be written in this way.

I don't yet know how to do any of this, but "it's just math". :lol:
 
We had a German Shepherd mixed with Collie, and I don't remember if she was so very afraid of fireworks, but we were up and about and also howling together with the dog.

Then you know all about it. I have no way of knowing it all, but one note: there was a basic handmade wolf in our local library. The sense it gave the atmosphere was that it filled out the space, and it felt nice to look at and to touch the softness. It gave feelings, even though it looked a little frightening.

There should be automatic AI-learner type software that coordinates the movement of the legs and such, to the degree that the robot can then walk and "make" most of the day's walk without falling down or getting stuck. (It kind of works the way you wrote it down, like our brain in a sense: missing links get destroyed or cut short as they are not used or "effective". This kind of learner program has existed for some time before the AI breakout of this year, 2023... going into 2024 :D )

These take a lot of time to make, and in a sense it is automatic. However, sometimes they need to change the hardware, as it is too weak to move the weight or the leg the way it is supposed to, and as such it fails in that sense rather than in the brainy department.

Until small and round computers, or the ARM 32-bit chips and the stuff that drives sensors and more, get easier to learn, I'll stay out of it. My drive and ambition inside me would have to just explode with available energy for me to start any research in that area.

I think knowledge learning is exponential, so the more you learn, the faster and more easily you can learn. Also, having a wall or a person to talk to about your ideas gets things moving forward faster as well.
 
I found the old printer tester display; apparently it is super common and has Arduino libraries available for it, like this one and others; the linked one also has a GPIO library that will probably be useful for other things.

EDIT add

It seems a bit complicated to use this display (most Arduino threads about it end in nothing usable), and it's very low-res and just monochrome, so rather than wasting a lot of time on it I'll probably just go find an easy-to-use color display, or else some code that can just display via a PC-based program on my regular monitor (which would be better for my purposes anyway).


The sonic assault of explosives and gunfire unfortunately restarted a few hours ago, before I found that LCD, so I can't really concentrate on experimenting with the sensors and stuff; I'm afraid I'll wire something wrong and blow it up--I waited too long before starting work on it today and didn't do it while I still could have. :( It's hard enough to concentrate on just typing this up with all the booms. (There is more about the problems in the "end of the world" thread.)


We had a German Shepherd mixed with Collie, and I don't remember if she was so very afraid of fireworks, but we were up and about and also howling together with the dog.
IIRC the first dog we rescued from the pound after moving to Phoenix was Lucky, a GSD-Collie mix; one of the goofiest looking dogs, with the long collie face and mane, but totally GSD coloring. He always wanted to save my mom from "drowning" in the bathtub by taking her wrist and trying to get her to get out, so the door always had to be closed...and he'd sit outside the room against the door mournfully being dramatic to anyone that walked by that it's not his fault if she drowns in there.... :roll: :lol:



Then you know all about it.
All about what? I wish I did know all about something...preferably something useful that would make me a bunch of money to spend on my projects. :lol:

I have no way of knowing it all, but one note: there was a basic handmade wolf in our local library. The sense it gave the atmosphere was that it filled out the space, and it felt nice to look at and to touch the softness. It gave feelings, even though it looked a little frightening.
I'm guessing you mean a taxidermied wolf? (tanned skin fitted over a form to make it look like it was alive)

This project has to go a fair bit further than that, but snuggling for comfort and reducing anxiety is a primary purpose (hence the project name).

Wolf, rather than dog, because I felt an affinity for wolves even before we ever got dogs (I think I was in kindergarten? 1st grade?) and I was fascinated by them whenever I saw them. But wolves are not dogs and are not pets; they weren't bred to be our constant companions like dogs were...I've had one wolfdog (Lady) and she was VERY different in behaviors from any dog I've ever had or met (except other wolfdogs, and not even all of those); much more like the way wild wolves act in the many videos and films of them I've seen. As I have learned about the various kinds of wolves, I've found a particular affinity for the Mexican Wolf. There is just something about them...and they look more delicate, perhaps not quite "cute" but...

So, the project is Snuggles the *Wolf* because if it were possible (and not bad for the wolves since I'm a city person by necessity) I'd have those as companions rather than dogs...but since it's not, I'll settle for a "fake" one. ;)

Once the project is "complete" enough, it could be used for any animal companion type--not even limited to dogs, if the behavior sets can be swapped out, and whoever needs a companion but can't have a live one could have whatever they want to build.

But mine will be a wolf. :)



There should be automatic AI-learner type software that coordinates the movement of the legs and such, to the degree that the robot can then walk and "make" most of the day's walk without falling down or getting stuck. (It kind of works the way you wrote it down, like our brain in a sense: missing links get destroyed or cut short as they are not used or "effective". This kind of learner program has existed for some time before the AI breakout of this year, 2023... going into 2024 :D )

Yes, some form of neural-network AI is almost certainly going to have to be used, especially since I want it to be able to learn. Even though it can't invent its own behaviors, new ones may crop up in the process of modifying existing ones, and a "personality" of a sort could emerge for each system that makes it different from the others, in a vaguely similar way to how most dogs are a lot like each other but no two, even siblings that grew up together, are quite the same (and some are radically different!).

Thankfully I do not intend to make this project walk. That is a whole other can'o'worms, and maybe someday it would be interesting to pursue. All it has to do is be able to sit up, lie down, roll over, sit up again, wave its paws around, move its head around, open and close its mouth, and pose its ears.

I'd *love* to have a mobile expressive face, but that is also a whole other can'o'worms, and is likely to prevent a snuggleable head, which would make it much less useful to me. (One of the most comforting things when I am too stressed out is to pull the whole wolfy up over me on its side, with its heavy, weighted, but still kinda squishable head up on my chest under my chin, and just hug it...if the head were all hard motors and plates for moving the face around, that wouldn't really be that comforting.)

These take a lot of time to make, and in a sense it is automatic. However, sometimes they need to change the hardware, as it is too weak to move the weight or the leg the way it is supposed to, and as such it fails in that sense rather than in the brainy department.
I'm probably going to need more powerful motors than what I got to experiment with, but I picked cheap stuff so I can blow it up messing around without losing too much investment. Once I know what I am really getting into and how it will have to work, I can then get the appropriate parts. If I'm lucky, what I already have will at least work for some things (for instance, I'm pretty sure I can move the tail and ears and jaw with the servos I got, and possibly the paws if not the legs. The stepper motors will probably at least move the head around, if not the torso. )

Until small and round computers, or the ARM 32-bit chips and the stuff that drives sensors and more, get easier to learn, I'll stay out of it. My drive and ambition inside me would have to just explode with available energy for me to start any research in that area.

These things are actually easy to learn for most people (but I'm not normal and have trouble learning stuff like this; it takes a long time for my brain to process it, and it doesn't work on some things, especially mathy stuff). If I can "intuitively" grasp it and work with it that way, then I can use it. If not, it's very difficult for me to do anything with it.

I think the first programming I ever did was in BASIC on a VIC-20, the predecessor to the Commodore 64, at about the same time we were learning that on the Apple ][ when those were in schools all over the place. But I didn't progress very far with that. Several years later at DeVry, while they were teaching us electronics hardware repair, I learned something about direct machine coding on the 6500-series MCUs, and I have tried to learn other things in the decades since then, but have never been able to get far enough to do anything useful to me with it.

So I have a basic understanding of how this stuff "works", just have a hard time applying what I know, and figuring out all the rest of it.

AI stuff is even worse--it's pretty much all math from what I can see. I've been trying to watch some videos on it, since reading about it hasn't helped, but I just keep having to restart them over and over, trying to see where they are going with the explanations; as soon as the math bits start, I get lost.





I think knowledge learning is exponential, so the more you learn, the faster and more easily you can learn. Also, having a wall or a person to talk to about your ideas gets things moving forward faster as well.
You're right about that. My biggest problem with this project over the years (couple of decades or more?) has been finding people that I could at least discuss it with, without being laughed out of the room.

Or having people tell me I shouldn't bother and should just use one of the existing dogbots and stick it inside a big wolf plushie or something, because they simply can't understand the difference between those and what I need. :/ I can't express in words how many times I've had *that* "discussion" and had to just walk away because the people were worse than useless, actively unhelpful. (So hopefully you'll pardon my reaction whenever anyone brings other dogbots up....)


But people laughing me out of rooms hasn't stopped me from other things, like making music (or other art when I had time for it); I just don't have enough time or energy for most of the things I "need" to do, much less the ones I just want to. So much stuff in my brain I can never let out; just no time to. If I didn't have to work to make a living and buy the stuff to make the projects...and had another ten or twenty decades to work on it all... :lol:

So...I plod along, doing a bit here and a bit there, and sometimes end up with something vaguely like what I was after.

This project is the first time that nearly everything I need to take something (that is very complex) out of my head and into reality actually exists in a form I can afford and either directly use, or modify for it, including all the knowledge out there from the others making projects that use bits of stuff I can "glue together". The glue, however...that's the issue, as it's almost all coding that I am still learning (and may never fully comprehend).
 
Some thoughts on training an AI, and "level saving" (kind of like saved games, but not):

I am still learning how these things work, so my poor understanding may mean this is not yet possible.

If an AI "learns" by weighting "neurons" responses towards desired patterns of output, then "engrams" should be saveable and imprintable by saving or adding those weights for the neurons responsible for specific input/output chains.

Rather than just writing over existing weights to replace a whole set of behaviors, a specific behavior should be imprintable by adding the weights to the appropriate neurons. A specific behavior should be extractable (removable) by subtracting the weights from those.

It wouldn't be perfect, since weights might have been added from other learnings, and this may affect those, and vice-versa. So you could not use this to "construct" a specific exact "personality" or group of behaviors, but it could be used, once a behavior has been trained, to read that out and save it for later imprinting, etc. Or if weights were already known for how to create some specific behavior, they could be written in this way.

I don't yet know how to do any of this, but "it's just math". :lol:

Disclaimer: I am not up to speed on this entire thread.

Yes, what you're describing as "engrams" maps -- conceptually, I mean -- to neural network weights. The problem is that you can't just add weights to a network; they have to be attached to the structure of the network. I mean, a weight is basically just a coefficient, like in x = 2a: the 2 (or the "a", if you hold that value fixed, technically) would be an individual weight. But without changing the equation to x = 2a + 2b, where would you put another weight?

Another issue is that when you train an entire network model at the same time, all the weights end up doing their little part of the job. Without all or most of the other little pieces (i.e. without all the weights for the entire model), that little piece ends up not really doing the same job. It's like trying to build a 2nd floor on to your house: you can't really do it without a foundation. I mean you'd get something sorta like a 2nd floor, but it wouldn't be what you were trying to do, and it would be on the ground. ... wow, that's a terrible analogy.

It's more like making a cake. If you only use some of the ingredients, you don't really get anything that you want out of the process. You need all the ingredients and all the steps; only a few of them don't really get you what you want. There, that's a much better analogy.

Transfer learning is what you're talking about doing, if you want to know what to Google, by the way :)
 
Ah, thank you for the clarifications.

I'm sure my conceptions below are flawed at best, but since me and math don't really work together very well, I'll have to try some different thinking to see if I can figure out how things really work, and then how they could be made to work:

It sounds like the weights and math just need to be constructed differently so that the variables needed are already always there, just "empty" (equal to 1 for multipliers, zero for adders?). Then just insert or remove the values, rather than the entire variable and operator. I have no idea if things could possibly work this way, but there should be *some* way to do this sort of thing, even if the implementations presently used don't do it.

Regarding how the weights work within the system, I was kind of thinking they might work (or could be able to work) like...Hmm. Analogies *are* hard. How about: a flexible surface (rubber sheet?) that a set of "decision balls" have to pass across and end up in one of a number of possible outputs, starting from one or more of a number of possible inputs. Different weights at different decision points scattered all over the surface attract (positive weights) or deflect (negative weights) the path to cause a tendency for them to pass more to specific end points from some starting points, and more to other different end points from other starting points.

Are either of those even remotely realistic? :oops:



But...I'm not so worried about Transfer Learning yet; I'd rather get the sensor system going (to detect and localize input via the accelerometers' data with inertial tracking) and then worry about improving it. :lol:


I got one of the ESP32s (the WROOM version; it was the first one on top in the box) onto a breadboard with one of the MEMS microphones, the temperature sensor, and an accelerometer. Now I am working on getting the right libraries for each of these installed in the Arduino IDE, and then figuring out example code to read them and do something really simple with the data to show me it's working, like lighting an LED or changing its color, or making a sound and changing its pitch, based on the change in the data. Then I'll at least know things physically function, and from there I can try to figure out how to process the data from two sensors to get relative position, speed, etc.
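For that "show me it's working" step, a minimal smoke-test sketch along these lines might do, assuming the i2cdevlib-style MPU6050 library from the earlier snippets, an ESP32 devkit whose onboard LED is on GPIO 2, and the usual WROOM devboard I2C pins (SDA 21, SCL 22; check the specific board). It just lights the LED whenever the accelerometer readings change by more than an arbitrary amount, i.e. when the board is moved or tapped:

    #include <Wire.h>
    #include <MPU6050.h>

    const int LED_PIN = 2;     // onboard LED on many ESP32 devkits (assumption; check yours)
    MPU6050 mpu;
    int16_t lastAx = 0, lastAy = 0, lastAz = 0;

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      Serial.begin(115200);
      Wire.begin(21, 22);      // explicit SDA, SCL for the ESP32
      mpu.initialize();
      Serial.println(mpu.testConnection() ? "MPU6050 found" : "MPU6050 NOT found");
    }

    void loop() {
      int16_t ax, ay, az;
      mpu.getAcceleration(&ax, &ay, &az);
      long change = abs(ax - lastAx) + abs(ay - lastAy) + abs(az - lastAz);
      digitalWrite(LED_PIN, change > 2000 ? HIGH : LOW);   // arbitrary threshold, tune by hand
      lastAx = ax; lastAy = ay; lastAz = az;
      delay(10);
    }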
 