Rough draft of a 6FET power stage intended to be part of a larger controller

ARod1993

I couldn't sleep, so I took a whack at designing a power board for a fairly large motor controller. The board itself is a 6-FET power board; the idea would be to parallel several of them to create a 150V, 400-600A-capable controller. The board takes in a differential PWM signal and an isolated 15V rail for each phase, plus non-isolated 5V and 15V rails for control logic and low-side drive, and uses isolated half-bridge drivers for each transistor pair. I went with the HSOF-8 package (also known as TO-leadless) partly out of curiosity, but primarily because the AOTL66518 caught my eye; it claims 214A at 150V, which is likely fairly optimistic. That said, 150V at 100A per board isn't a terribly bad deal at $7 per FET, and these look like they can switch fast: back of the envelope says 115nC of gate charge with a 4A gate driver is ~30ns, so switching losses at 100kHz (200kHz center-aligned) would only be about 40W per device. Add in the conduction losses and you're at about 80W per device, which is 95% efficient, and 100kHz is fast enough to drive controller-eating motors.
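For anyone who wants to poke at those numbers, here's the back-of-the-envelope as a short Python sketch. The loss model (V·I overlap spread across both edges per PWM period) and taking the datasheet gate charge at face value are my assumptions:

```python
# Rough hard-switching estimate from the figures above (AOTL66518-ish numbers).
# Assumes the datasheet gate charge is accurate and that switching loss is
# roughly V * I * t_sw per PWM period (both triangular edges combined).

Qg = 115e-9    # total gate charge, C (datasheet figure)
Ig = 4.0       # gate driver current, A
Vbus = 150.0   # bus voltage, V
Iload = 100.0  # switched current per board, A
fsw = 100e3    # switching frequency, Hz

t_sw = Qg / Ig                    # ~29 ns per transition
P_sw = Vbus * Iload * t_sw * fsw  # ~43 W per device, same ballpark as the ~40 W above

print(f"t_sw = {t_sw * 1e9:.0f} ns, P_sw = {P_sw:.0f} W")
```

Real parts will switch slower than the gate-charge arithmetic suggests, so treat this as a floor on the losses rather than a prediction.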

The power path is designed for fairly low inductance; the positive and negative buses are on the top and bottom of the board, with interlocking comb teeth to each transistor. The phase leads are intended to attach via busbar perpendicular to the main power bus (the two holes between each transistor pair are the mounting points for that); in a multistage controller the boards would sit next to each other and a single busbar would connect all the boards on each phase lead. For bus capacitance I went with 1640uF of bulk electrolytic, plus 2.2uF ceramic very close to the ground pad of each FET. I'd love comments and critique of the layout! Here are a few shots of the 3D board view and the layout:

pUsHXUS.png

c7UC1xF.png

zwMa6TQ.png
 
If you want to experiment with high current, I'd strongly recommend switching to a gate driver with safety features such as a Miller clamp. There are some that integrate both a high- and low-side driver, but there are more options among single-driver-per-package parts.

PCBs don't like to dissipate heat, and removing heat from one is difficult. I've done quite a bit of experimenting to figure out what I can get away with when using PCBs to carry high currents, and it's still a bit inconclusive.

Here is a 2-phase boost converter I designed that was about 97% efficient; input was around 50A DC, and the PCB was 4 layers (1oz outer, 0.5oz inner). Total power dissipation is about 18W, with ~8W lost to PCB resistance.

flir_20210529T210033.jpg
flir_20210529T205629.jpg

This is a 6-layer, 3oz-per-layer PCB fed at 100A DC, so 9oz of copper on positive and 9oz on negative:
flir_20210222T163720.jpg

If you are planning to parallel devices, use only one gate driver per parallel group; multiple gate drivers will not trigger at exactly the same time and will cause a current imbalance. All paralleled devices should trigger from a single gate drive.

It's best to avoid vias in gate drive traces, as they add inductance to a critical, high-dI/dt signal path.

If you aren't experienced with the limitations of paralleling and high current, start simple, succeed there, then build on that experience. It's difficult enough to get a decent design that doesn't self-destruct under real-world conditions, let alone one that switches hundreds of amps among many parallel devices in less than 100ns at 100kHz. Very few motors would need >50kHz switching frequency.

Don't forget, if high current was easy, we'd have a bunch of high current controllers to choose from... but it's not easy.
 
Where can you actually buy that AOTL66518 FET? It looks great.

Several of us have discovered the hard way, time and again, that datasheet ratings need to be divided by roughly 2, even with really good cooling.
 
mxlemming said:
Where can you actually buy that AOTL66518 FET? It looks great.

Several of us have discovered the hard way, time and again, that datasheet ratings need to be divided by roughly 2, even with really good cooling.

They're right on DigiKey: https://www.digikey.com/en/products/detail/alpha-omega-semiconductor-inc/AOTL66518/12823097

The datasheet says 150V/214A; I'm assuming I can get away with 150V at 100A if I use copper busbar to pull the current out of the center node and to buttress the current handling of the power planes on the right side. The two power planes on the right sit basically on top of each other (V+ on top, V- on the bottom) except where the MOSFETs actually are, in an effort to minimize parasitic inductance on the bus. I may wind up figuring out how to add heat pipes to the backside of the board to help pull heat off the FETs if the copper busbar isn't able to sink the heat effectively.

zombiess said:
If you want to experiment with high current, I'd strongly recommend switching to a gate driver with safety features such as a Miller clamp. There are some that integrate both a high- and low-side driver, but there are more options among single-driver-per-package parts.

PCBs don't like to dissipate heat, and removing heat from one is difficult. I've done quite a bit of experimenting to figure out what I can get away with when using PCBs to carry high currents, and it's still a bit inconclusive.

Here is a 2-phase boost converter I designed that was about 97% efficient; input was around 50A DC, and the PCB was 4 layers (1oz outer, 0.5oz inner). Total power dissipation is about 18W, with ~8W lost to PCB resistance.

View attachment 1


Why such a discrepancy between the two designs? I dunno, it needs further investigation.

This is a 6 layer 3oz per layer PCB fed at 100A DC, so 9oz on positive and 9oz on negative
View attachment 2

It's best to avoid vias in gate drive traces, as they add inductance to a critical, high-dI/dt signal path.
I'm thinking of switching from the ADuM4221 to the ADuM4135 for gate drive to get more safety features (Miller clamp and desat detection internal to the device); it's more expensive because there's one per FET, but that also lets me put the drive pins right on top of the FET gate when I do the next cut of the layout.

zombiess said:
If you are planning to parallel devices, you should only use one gate driver per parallel group; multiple gate drivers will not trigger at exactly the same time and will cause a current imbalance. All switched devices should trigger from a single gate drive.

Question: how do you run a bunch of FETs off a single driver without massively slowing down the switching transitions and burning a ton of energy in the process? Most of the chips I've seen are 4-6A drivers, which can switch a nice modern high-power MOSFET in about 20ns; in a 24-36 FET controller that turn-on time is going to wind up in the 100+ns range if you run 4-6 FETs off a single driver. I pulled up the Sevcon photo dump, but I can't quite see how they did it.

zombiess said:
If you aren't experienced with limitations of paralleling and high current, start simple, succeed there, then build on that experience. It's difficult enough to get a decent design which doesn't self destruct under real world conditions, let alone a design which is switching hundreds of amps among many parallel devices in less than 100ns at 100kHz. Very few motors would need > 50kHz switching frequency.

Don't forget, if high current was easy, we'd have a bunch of high current controllers to choose from... but it's not easy.

I have some experience doing stupidly high-power electronics in small boxes (325kW interleaved rack-mount converter (50kHz on each leaf) at my last job); I haven't tried to lay out high-power stuff before though. The layout I have above is basically the result of me reading through a fair number of the old motor controller design threads and trying to avoid the things that you, HighHopes, MxLemming, etc. call out as bad practice.
 
Thanks for that. The stock is new enough that they didn't show up on Octopart. Might snaffle up a few.

These FETs don't really switch much slower with lower gate current. I've got some boards with 1A per FET (12 ohms total at 12V) on the IPT015N10, and they switch at the same speed as when I give them 2.5A (4.7 ohms at 13V).

This is because the reverse transfer capacitance is ultra low, so the feedback as the switch node changes is tiny. FETs switch slowly because there's a negative feedback loop between the drain voltage and gate voltage. With a 6A driver you can easily drive 3 of these; 4-6 is probably possible if you test and check.

Now, about your layout: I would rotate the FETs so that the drain of the high side and the source of the low side are right next to each other, then put a decoupling capacitor right across them. This way your total parasitic inductance can drop to about 1nH, which is close to negligible.
 
ARod1993 said:
Question: how do you run a bunch of FETs off a single driver without massively slowing down the switching transitions and burning a ton of energy in the process? Most of the chips I've seen are 4-6A drivers, which can switch a nice modern high-power MOSFET in about 20ns; in a 24-36 FET controller that turn-on time is going to wind up in the 100+ns range if you run 4-6 FETs off a single driver.

I have some questions:
What percentage do switching losses make up of your total losses?
How slow can you switch the device and still obtain acceptable results?
What is the minimum switching frequency you can utilize to drive your load?
How much DC Link ripple current do you need to drive your desired load and at what switching freq?

My experience with higher currents has always led me to switch slower to minimize inductance effects, but my primary concern is reliability. I'll usually end up around 200-300ns with 3 parallel IRFP4568s.

If you haven't tried paralleling and driving multiple devices, I suggest running some controlled pulse tests at your target current on a test PCB before you get too far. It's a very enlightening experience.
 
zombiess said:
I have some questions:
What percentage do switching losses make up of your total losses?

On the 6-FET board above at 100kHz, if I use the datasheet to derive an approximate switching time, the AOTL66518 would switch in 12ns or so at 4A of gate drive, so the overall switching loss would be about 8.8W per device, while conduction losses at datasheet Rdson would be about 43W. At 1A of gate drive, the switching time increases to about 50ns and switching losses rise to about 32W per device, which pushes total losses from roughly 52W to 75W per device.
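A small sketch of that loss arithmetic, for reference. The 0.5 triangular-overlap factor and the ~4.3mOhm Rdson here are assumptions back-solved from the quoted 8.8W and 43W figures, not datasheet values, and the slow-drive result lands near (not exactly on) the 32W quoted above:

```python
# Rough per-device loss split for one hard-switched FET.
# Overlap factor and Rdson are assumptions fitted to the post's figures.

def device_losses(v_bus, i_load, t_sw, f_sw, r_dson):
    p_sw = 0.5 * v_bus * i_load * t_sw * f_sw  # triangular V-I overlap per cycle
    p_cond = i_load ** 2 * r_dson              # worst-case 100% conduction duty
    return p_sw, p_cond

fast = device_losses(150.0, 100.0, 12e-9, 100e3, 4.3e-3)  # ~ (9.0 W, 43.0 W)
slow = device_losses(150.0, 100.0, 50e-9, 100e3, 4.3e-3)  # ~ (37.5 W, 43.0 W)
print(fast, slow)
```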

zombiess said:
How slow can you switch the device and still obtain acceptable results?

What is the minimum switching frequency you can utilize to drive your load?
How much DC Link ripple current do you need to drive your desired load and at what switching freq?
I'm honestly not sure; I'm shooting for a design that can comfortably drive a 5-10uH motor while keeping current ripple down to around 5% of nominal current.

zombiess said:
My experience with higher currents has always lead me to switch slower to minimize inductance effects, but my primary concern is reliability. I'll usually end up around 200-300ns with 3 parallel IRFP4568's.

If you haven't tried paralleling and driving multiple devices, I suggest running some controlled pulse tests at your target current on a test PCB before you get too far. It's a very enlightening experience.

That should be interesting; I've never done that before. The high-power work I did used those big FET/IGBT bricks, so we only needed one of them.
 
ARod1993 said:
zombiess said:
I have some questions:
What percentage do switching losses make up of your total losses?

On the 6-FET board above, at 100kHz, if I use this datasheet to derive an approximate switching time, the AOTL66518 would have a switching time of 12ns or so at 4A, and so the overall switching loss would be about 8.8W per device at 100kHz, while conduction losses assuming datasheet Rdson would be about 43W. At 1A each, the switching time would increase to about 50ns, and the switching losses per device rise to about 32W, which winds up increasing total losses from 52W to 75W per device.

zombiess said:
How slow can you switch the device and still obtain acceptable results?

What is the minimum switching frequency you can utilize to drive your load?
How much DC Link ripple current do you need to drive your desired load and at what switching freq?
I'm honestly not sure; shooting for a design that can comfortably drive a 5-10uH motor while keeping current ripple down to around 5% or so of nominal current flow.

zombiess said:
My experience with higher currents has always lead me to switch slower to minimize inductance effects, but my primary concern is reliability. I'll usually end up around 200-300ns with 3 parallel IRFP4568's.

If you haven't tried paralleling and driving multiple devices, I suggest running some controlled pulse tests at your target current on a test PCB before you get too far. It's a very enlightening experience.

That should be interesting; I've never done that before. The high-power work I did used those big FET/IGBT bricks, so we only needed one of them.

From extensive experience with these kinds of FETs...
12ns is pie in the sky; you will get 30-40ns in reality. However, at this speed the output capacitance starts to become important (it gets charged by the currents being switched), and the line between what is switching loss in the FET and what is capacitor charging gets difficult to assess.

50ns with a 1A drive is about reality on my boards. The time constant on the gate is about 3× that.

You can switch these FETs at 50kHz centre-aligned, 100kHz switching, no problem. Faster than this I have not tried, but your dead-time requirements will become substantial (dead time != switching time, to be clear!).

Your previously pictured layout will not play nicely with this switching speed and current, but there isn't too much to do to make it functional.

You can run a 5uH motor from a 40V rail at 15kHz (I have this on a large VESC-based controller on my desk) and it isn't disastrous, for sure. 50kHz PWM / 100kHz switching is comfortable.

You definitely do not need a Miller clamp for these FETs. It is not meaningful, since the Miller capacitance is tiny and you will not experience parasitic turn-on. You may experience parasitic turn-on from poor gate trace layout, but a Miller clamp WILL NOT help with that, since it drives through the same trace, and with the resistances you're talking about using to get 1-4A of drive, the Miller clamp will not pull any meaningfully higher current.

Desat protection is technically not meaningful here (a MOSFET does not desaturate; that is an IGBT phenomenon), but as Peter has pointed out, it can act as very nice short-circuit detection. However, you will have to find a way to substantially modify the 9V threshold of the AD part you chose.

Switching fast requires a very specific, low-inductance layout; you either have to get the layout really good or switch slower. There are also EMC considerations; as a hobbyist you may choose to ignore them, but if you're selling you are more or less obliged not to.
 
mxlemming said:
ARod1993 said:
zombiess said:
I have some questions:
What percentage do switching losses make up of your total losses?

On the 6-FET board above, at 100kHz, if I use this datasheet to derive an approximate switching time, the AOTL66518 would have a switching time of 12ns or so at 4A, and so the overall switching loss would be about 8.8W per device at 100kHz, while conduction losses assuming datasheet Rdson would be about 43W. At 1A each, the switching time would increase to about 50ns, and the switching losses per device rise to about 32W, which winds up increasing total losses from 52W to 75W per device.

zombiess said:
How slow can you switch the device and still obtain acceptable results?

What is the minimum switching frequency you can utilize to drive your load?
How much DC Link ripple current do you need to drive your desired load and at what switching freq?
I'm honestly not sure; shooting for a design that can comfortably drive a 5-10uH motor while keeping current ripple down to around 5% or so of nominal current flow.

zombiess said:
My experience with higher currents has always lead me to switch slower to minimize inductance effects, but my primary concern is reliability. I'll usually end up around 200-300ns with 3 parallel IRFP4568's.

If you haven't tried paralleling and driving multiple devices, I suggest running some controlled pulse tests at your target current on a test PCB before you get too far. It's a very enlightening experience.

That should be interesting; I've never done that before. The high-power work I did used those big FET/IGBT bricks, so we only needed one of them.

From extensive experience with these kind of FETs...
12ns is pie in the sky, you will get 30-40 in reality. However at this speed, the output capacitance starts to become important (charging it with the currents being switched) and the line of what is switching loss in the FET and what is capacitor charging gets difficult to assess.

50ns with a 1A drive is about reality on my boards. The time constant on the gate is 3* this.

You can switch these FETs at 50kHz centre-aligned, 100kHz switching, no problem. Faster than this I have not tried, but your dead-time requirements will become substantial (dead time != switching time, to be clear!).

Your previous pictured layout will not play nicely with this switching speed and current, but there isn't too much to do to make it functional.

You can run a 5uH motor with a 40V rail at 15kHz (I have this for a large VESC based controller I have on my desk) and it isn't disastrous for sure. 50kHz PWM 100kHz switching is comfortable.

You definitely do not need a Miller clamp for these FETs. It is not meaningful, since the Miller capacitance is tiny and you will not experience parasitic turn-on. You may experience parasitic turn-on from poor gate trace layout, but a Miller clamp WILL NOT help with that, since it drives through the same trace, and with the resistances you're talking about using to get 1-4A of drive, the Miller clamp will not pull any meaningfully higher current.

Desat protection is technically not meaningful here (a MOSFET does not desaturate; that is an IGBT phenomenon), but as Peter has pointed out, it can act as very nice short-circuit detection. However, you will have to find a way to substantially modify the 9V threshold of the AD part you chose.

Switching fast requires very specific and low inductance layout, you have to either get the layout really good, or switch slower. There are also EMC considerations, as a hobbyist you may choose to ignore them, if selling you are kind of obliged not to.

Thanks for the advice! If you don't mind my asking, what changes would I need to make the layout functional for 100A at 50-100kHz?
 
You need to consider: when you switch, the current changes from flowing from the high side into the phase, to the low side into the phase, and vice versa. How big is the change in that path? Draw the lines from ground into the phase and you'll see your current design encloses about 6cm². Your primary goal is to minimize this area.

You need to follow it all the way upstream towards the battery/PSU until you reach a big and fast enough decoupling capacitor.

You can make other changes that will really help, like putting a ground plane on the layer closest to the conductive traces (the yellow layer 1). This dramatically reduces the inductance of the traces.
https://spok.ca/index.php/resources/tools/106-traceindcalc
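To put a rough number on that ground-plane effect: a common first-order estimate for a wide trace over a solid ground plane is L ≈ μ0·h·l/w. A tiny sketch (the dimensions below are made up for illustration, not taken from the board):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def trace_inductance(length_m, width_m, height_above_plane_m):
    """First-order inductance of a wide trace over a solid ground plane.
    Assumes width >> height above the plane; ignores fringing fields,
    so it underestimates somewhat for narrower traces."""
    return MU0 * height_above_plane_m * length_m / width_m

# Hypothetical: 30 mm long, 10 mm wide trace, 0.2 mm above the plane
L = trace_inductance(0.030, 0.010, 0.0002)
print(f"{L * 1e9:.2f} nH")  # sub-nanohenry with a close plane
```

Moving the plane closer (thinner prepreg) or widening the trace drops the inductance proportionally, which is why the adjacent-layer ground plane helps so much.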
 
Would you both share the benefits you see in switching really fast? What does switching in 50ns vs 500ns really mean for your target application? Are you targeting continuous use (years of non-stop running) or something more along the lines of a personal electric vehicle? How does 60W of losses at 50ns vs 600W at 500ns matter in your application? Have you done the math to translate it into thermal management requirements?

My #1 concern is reliability, because it can't be used if it blows up, and I put a very high value on my time. I do push the limits, but only after I know what I have is reliable. Blown controllers designed by professional power electronics engineers have been known to take out battery packs as well. Chances are that if you want to run high current, you'll have a decent-size battery pack, and that presents a safety issue IMO.
 
mxlemming said:
You need to consider: when you switch, the current changes from flowing from the high side into the phase, to the low side into the phase, and vice versa. How big is the change in that path? Draw the lines from ground into the phase and you'll see your current design encloses about 6cm². Your primary goal is to minimize this area.

You need to follow it all the way upstream towards the battery/PSU until you reach a big and fast enough decoupling capacitor.

You can make other changes that will really help like putting a ground plane on the layer closest to the conductive traces (the yellow in 1 layer). This dramatically reduces the inductance of the traces.
https://spok.ca/index.php/resources/tools/106-traceindcalc

Thanks! I'm going to try to take a second cut at it over the week next week and will post the new layout up here hopefully by sometime next weekend :)

zombiess said:
Would you both share the benefits you see in switching really fast? What does switching in 50ns vs 500ns really mean to your target application? Are you targeting continuous use (years of non stop running) or something more along the lines of use in a personal electric vehicle? How does 60W of losses at 50ns vs 600W at 500ns matter in your application? Have you been through the math to equate it to thermal management requirements?

My #1 concern is reliability, because it can't be used if it blows up, and I put a very high value on my time. I do push the limits, but only after I know what I have is reliable. Blown controllers designed by professional power electronics engineers have been known to take out battery packs as well. Chances are that if you want to run high current, you'll have a decent-size battery pack, and that presents a safety issue IMO.

I look to fast switching as a way to maximize controller efficiency. At the reasonably high switching frequencies I'm used to from the power-converter world (50kHz and up), and on the devices I used to work with, switching losses dominate conduction losses by a wide margin; one converter design I looked at had maybe a 10C rise from conduction losses in the initial analysis and 30C+ from switching losses. I'm trying to hit 95-98% system efficiency, both because I don't want to deal with any more thermal management than I absolutely have to and because I want my battery pack to last as long as possible. I'd like to eventually do an initial run of a controller, BMS, and vehicle electronics set that I've designed myself, stick it on a personal EV, and work the kinks out to the point where what I have is sellable.
 
zombiess said:
Would you both share the benefits you see in switching really fast? What does switching in 50ns vs 500ns really mean to your target application? Are you targeting continuous use (years of non stop running) or something more along the lines of use in a personal electric vehicle? How does 60W of losses at 50ns vs 600W at 500ns matter in your application? Have you been through the math to equate it to thermal management requirements?

My #1 concern is reliability, because it can't be used if it blows up, and I put a very high value on my time. I do push the limits, but only after I know what I have is reliable. Blown controllers designed by professional power electronics engineers have been known to take out battery packs as well. Chances are that if you want to run high current, you'll have a decent-size battery pack, and that presents a safety issue IMO.
I think you're operating on a misunderstanding here.

Fast switching doesn't damage things, voltage spikes and current hot spots do.

One of the easiest ways to kill a MOSFET is to make it switch slowly. By doing so, you open parts of the junction before other parts, and those bits take the whole of the energy dissipation. Try it... switch 100A over 10us and it will probably blow after a few cycles. I watched a colleague destroy a tube of 20 MOSFETs trying to achieve a soft start by slow switching; he adamantly insisted he just needed to control the profile of the turn-on while ignoring the half joule of energy being dissipated in the junction over 50us. In the end, it worked by adding a resistive element and two fast-switching MOSFETs, one switching the resistance in and the other shorting out the resistance a short time after.

You've been working with VESC for a long time, and we need to be clear that VESC firmware can and does do things that can instantly kill a controller, most commonly shifting the PWM orientation such that it's not in phase with the BEMF, which then generates a massive kickback onto the bus. It's important not to conflate those issues with issues like fast switching causing Miller turn-on, nor with wire disconnections or shorts to Vbus/ground/phase-to-phase.

All the big manufacturers are using fast switching now, tech notes from Infineon, ST, TI... All have it.

They're targeting hyper fast switching with GaN, early days but they're claiming good reliability.

There are loads of car inverters using fast switching, this requires high reliability and safety. Infineon released their BSG kit a few years ago, and have evaluated and published results in appnotes with switching waveforms.

ARod can achieve his goal. There's nothing unreasonable about it.
 
To put it very simply, several years in the medical device industry have taught me one resounding thing.

If you want an easy life, do not protect against failure modes. Eliminate the failure modes.

Only introduce protection where the failure modes are outside your control, and you absolutely cannot eliminate them.
 
mxlemming said:
I think you're operating on a misunderstanding here.

Fast switching doesn't damage things, voltage spikes and current hot spots do.

I don't switch fast or slow; I switch only as fast as is needed after examining the entire system. I'm advocating a less aggressive approach to increase the probability of success and get faster, usable results. It's incredibly rare to get it right on the first pass.

Some of the things I design switch really fast (<50ns), others switch "slow" at 500ns; usually it's somewhere in between, depending on the system. In my SMPS designs I often switch as fast as I can to reduce switching losses, because those designs usually run at 100kHz+, so switching losses often exceed conduction losses. In motor drive designs, where currents are usually higher and switching frequencies tend to top out around 30kHz, switching losses are usually less important than conduction losses. I try to find a good balance.

In other terms: you may be a fast racer, but it's unlikely you'll set the fastest lap time on a track that's new to you. If you go all out on your first lap, there's a good chance you won't even complete it.
 
Let's say we are working with a 5uH motor with a 100Vdc bus.

V = L·di/dt, so di/dt = 100V / 5uH = 2e7 A/s.

Divide this by 100000Hz switching frequency, and we get 200 A/switching cycle.

If this is a 1000A motor, then 200App of ripple on top of that is probably not too big a problem. Drop the switching frequency to 50kHz and now we have 400App of ripple, which starts to become a lot. If it's only a 200A motor, then 200App on top of that is going to increase losses a fair bit.
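The arithmetic above, spelled out as a couple of lines of Python (same numbers as the post):

```python
# Worst-case current ripple for an inductive load across the full bus.
v_bus = 100.0    # DC bus voltage, V
l_motor = 5e-6   # motor inductance, H
f_sw = 100e3     # switching frequency, Hz

di_dt = v_bus / l_motor   # 2e7 A/s
ripple = di_dt / f_sw     # ~200 A per switching cycle
print(f"di/dt = {di_dt:.2e} A/s, ripple = {ripple:.0f} A/cycle")
```

Halving the switching frequency doubles the ripple, which is the 50kHz → 400App point made above.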

Should someone really be designing a 200A, 5uH motor that is intended to spend a significant amount of time operating with low BEMF? Maybe not. Maybe we are trying to use airplane motors in ground traction applications? What do you all think?
 
thepronghorn said:
Let's say we are working with a 5uH motor with a 100Vdc bus.

v=Ldi/dt so di/dt=100/5e-6=2e7 A/s

Divide this by 100000Hz switching frequency, and we get 200 A/switching cycle.

If this is a 1000A motor, then 200App ripple on top of that is probably not too big a problem. Drop the switching frequency to 50kHz, and now we have 400App of ripple which starts to become a lot. If it is only a 200A motor, then 200App on top of that is going to increase losses a fair bit.

Should someone really be designing a 200A, 5uH motor that is intended to spend a significant amount of time operating with low BEMF? Maybe not. Maybe we are trying to use airplane motors in ground traction applications? What do you all think?

Honestly, my benchmark for the power levels I eventually want to handle is an Emrax 188 or 208 LV; they're amazingly light (~20-22lbs for 68kW peak / 41kW continuous), but the low-voltage versions claim ~7-7.5uH inductance, which will require fast switching and a fast control loop to manage safely. There's a whole genealogy of builds on here based on airplane motors; the 63mm and 80mm Hobbyking/Alien Power motors are low-inductance airplane motors that can deliver 5-10kW peak from 3-5lbs of motor, and this controller is essentially targeted at the bigger version of that. If I can get a solid layout done with silicon TOLL FETs that can push a few hundred amps, the next natural step is to wait for SiC Rdson values to drop another few steps and populate the board with SiC to push the switching frequency up farther.
 
This is a valid question I've often considered.

The fact is, these motors exist and are surprisingly cost effective. They're not bad as traction motors if you can tame them... Tool man's bike is pretty impressive.

thepronghorn said:
Let's say we are working with a 5uH motor with a 100Vdc bus.

v=Ldi/dt so di/dt=100/5e-6=2e7 A/s

Divide this by 100000Hz switching frequency, and we get 200 A/switching cycle.

A few issues with this calc... I do get the point you're making :D
5uH is a per-phase, not phase-to-phase, inductance, and the worst-case ripple works out to be at 50% duty when the BEMF is roughly half the bus, so you actually have di/dt = 50V / 10uH (there might be some sqrt(3) factors in there, IIRC), which gives 5A/us.

With 100kHz switching at 50% duty, that's 5us on, 5us off... so we're looking at about 25A, maybe a bit more including the sqrt(3)-like factors I've left out, which makes things much more manageable. A 5uH motor is probably also a 5mOhm motor and probably saturates at 200A+, so the ripple isn't too bad from a loss perspective.
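For comparison with the earlier estimate, here's that refinement as a sketch (the 2× per-phase loop inductance and half-bus voltage at 50% duty are the assumptions stated above; the sqrt(3) factors are still left out):

```python
# Refined ripple estimate: phase-to-phase loop = 2 * per-phase inductance,
# effective voltage across it ~ Vbus/2 at 50% duty.
v_bus = 100.0    # V
l_phase = 5e-6   # per-phase inductance, H
f_sw = 100e3     # switching frequency, Hz

di_dt = (v_bus / 2) / (2 * l_phase)  # 5 A/us
t_on = 0.5 / f_sw                    # 5 us on-time at 50% duty
ripple = di_dt * t_on                # ~25 A peak-to-peak
print(f"ripple = {ripple:.0f} App")
```

Roughly an 8× reduction from the naive 200App figure, which is why the distinction between per-phase and loop inductance matters so much here.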

The next problem is that a motor with 5uH probably also has a Kv of 150, so it'll do 15krpm mechanical... 150+krpm electrical... Something has to give: either it's not 100V-ready, or it's enormous (the 68kW Emrax is kind of enormous) and 50A of ripple is just okay, or... something.

So from where I'm sitting, 50kHz PWM = 100kHz switching is a good target. It's sufficient for every motor I've yet encountered or heard of. Faster than that, I've yet to see a solid application.

Maxon in one of their datasheets talks about 50kHz. They make very low inductance coreless motors, which definitely have utility for high-speed, low-loss operation.
 
I just re read

https://endless-sphere.com/forums/viewtopic.php?f=30&t=43306&hilit=Ca120%2A

Amazing thread by Miles, Toolman, Crossbreak et al.

They have FEMM models of a typical low-inductance motor (same as the one I have) and are getting very, very good results.

But they had to get Kelly to bump up to 33kHz to run it, which is not nice for a through-hole Kelly.

Your controller definitely has utility.

The problem is, the software is the lacking part. VESC cannot run at more than 50kHz any more (switching, not PWM), and I've observed that can heating is worse at low PWM frequency. I can design a 300A controller that switches like you say in less than a week or so these days (I've come a long way since summer 2020), but the bring-up and taming of the software is still nightmarish.
 
mxlemming said:
I just re-read

https://endless-sphere.com/forums/viewtopic.php?f=30&t=43306&hilit=Ca120%2A

Amazing thread by miles, toolman, crossbreak et al.

They have FEMM models of a typical low-inductance motor (same as the one I have) and are getting very, very good results.

But they had to get Kelly to bump up to 33kHz to run it, which is not nice for a through-hole Kelly.

Your controller definitely has utility.

The problem is that the software is the lacking part. VESC cannot run at more than 50kHz any more (switching, not PWM), and I've observed that the can heating is worse at low PWM frequency. These days I can design a 300A controller that switches like you say in less than a week or so (I've come a long way since summer 2020), but the bring-up and taming the software is still nightmarish.

The answer there is to use an FPGA for control instead of an MCU. It bumps the price up significantly, but if you use one of these you can use the onboard CPU for CAN communication, ABS, and other things that need millisecond response times, and implement the motor control itself directly in hardware; a control loop that runs at 50kHz or so should then be possible (maybe faster, depending on the details of how the hardware is pipelined). It would probably need a new software stack (Verilog for the controls part, and then custom software to run the slow side of things), but the CPU software could probably be forked from VESC and chunks of it reused (assuming VESC is native to ARM).

The architecture I'm envisioning is one in which the motor controller doubles as a vehicle ECU: the CPU takes in throttle commands, wheelslip measurements, and battery current measurements, and maps those to Id and Iq values via a lookup table (Id should be zero unless you start doing field weakening), then writes those values to shared registers. The FPGA would then take the Id and Iq values from those shared registers and do the actual FOC work. That said, I'd probably test the new architecture with a DRV8350 on a 48V supply and a bunch of $1-$2 MOSFETs to reduce the cost of development fuckups, and then bump up to the big boy once I could comfortably run a C80100 at 2-3kW on the little board.
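The CPU-side lookup step described above might look something like this (the breakpoints, current values, and linear interpolation are all hypothetical; a real table would come from dyno data or a motor model):

```python
import bisect

# Hypothetical throttle -> Iq map (values made up for illustration);
# Id stays zero until field weakening is implemented.
THROTTLE_PTS = [0.0, 0.25, 0.5, 0.75, 1.0]       # normalized throttle
IQ_PTS       = [0.0, 40.0, 120.0, 220.0, 300.0]  # amps

def throttle_to_idq(throttle):
    """Map a normalized throttle command to (Id, Iq) by linear interpolation."""
    t = min(max(throttle, 0.0), 1.0)             # clamp to [0, 1]
    i = bisect.bisect_right(THROTTLE_PTS, t) - 1
    if i >= len(THROTTLE_PTS) - 1:
        return 0.0, IQ_PTS[-1]
    frac = (t - THROTTLE_PTS[i]) / (THROTTLE_PTS[i + 1] - THROTTLE_PTS[i])
    iq = IQ_PTS[i] + frac * (IQ_PTS[i + 1] - IQ_PTS[i])
    return 0.0, iq                               # (Id, Iq)

print(throttle_to_idq(0.5))   # -> (0.0, 120.0)
```

The FPGA side would only ever see the resulting (Id, Iq) pair, so the table can be retuned in software without touching the fabric.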
 
Yikes. That FPGA won't struggle.

How about using one that's solderable by mere mortals and costs closer to $10…

But yeah... go for it. There's no shortage of controllers of the usual 20ish-kHz type, so you'd be better off just buying one, but this is something different and not really available, which makes it much more interesting.
 
mxlemming said:
Yikes. That FPGA won't struggle.

How about using one that's solderable by mere mortals and costs closer to $10…

But yeah... go for it. There's no shortage of controllers of the usual 20ish-kHz type, so you'd be better off just buying one, but this is something different and not really available, which makes it much more interesting.

That's fair; the current Xilinx parts are unfortunately all BGA (though the older Spartan 3s are about $30 and come in 144-TQFP packages). I'd probably develop on the Zynq part and get everything optimized, and once I figure out how much space I need for the fast hardware, I'd try porting over to a Spartan 6 or Spartan 7 and switch to a softcore CPU.
 
ARod1993 said:
...if you use one of these you can use the onboard CPU for CAN communication, ABS, and other things that need millisecond response times, and implement the motor control itself directly in hardware; a control loop that runs at 50kHz or so should then be possible (maybe faster, depending on the details of how the hardware is pipelined). It would probably need a new software stack (Verilog for the controls part, and then custom software to run the slow side of things), but the CPU software could probably be forked from VESC and chunks of it reused (assuming VESC is native to ARM).

The architecture I'm envisioning is one in which the motor controller doubles as a vehicle ECU: the CPU takes in throttle commands, wheelslip measurements, and battery current measurements, and maps those to Id and Iq values via a lookup table (Id should be zero unless you start doing field weakening), then writes those values to shared registers. The FPGA would then take the Id and Iq values from those shared registers and do the actual FOC work.

TI's AM437x (ARM Cortex-A9, single core) can do something similar; it doesn't have an FPGA, but it has quad-core PRUs that can handle many of the real-time requirements (evaluation board: AM437x IDK). AFAIK it is intended for factory automation, not automotive.

It does PMSM FOC control, supports configurable sigma-delta decimation filtering in conjunction with the AMC1304, and supports various position encoders like EnDat, BiSS, Tamagawa, etc. It can synchronize the PWM with an EtherCAT network as well, and it has a lot of IP blocks like CAN. But the FOC control loop runs on the ARM Cortex-A9 itself.
 
ARod1993 said:
That's fair; the current Xilinx parts are unfortunately all BGA (though the older Spartan 3s are about 30 and come in 144-TQFP packages. I'd probably develop on the Zynq part and get everything optimized, and once I figure out how much space I need for fast hardware, I'd try porting over to a Spartan 6 or Spartan 7 and switch to a softcore CPU.

Spartans are only FPGAs, without an integrated CPU, right?
 
afzal said:
ARod1993 said:
That's fair; the current Xilinx parts are unfortunately all BGA (though the older Spartan 3s are about $30 and come in 144-TQFP packages). I'd probably develop on the Zynq part and get everything optimized, and once I figure out how much space I need for the fast hardware, I'd try porting over to a Spartan 6 or Spartan 7 and switch to a softcore CPU.

Spartans are only FPGAs, without an integrated CPU, right?
Right; the Zynq part has a hard CPU that shares a portion of its memory fabric with the FPGA. If I have room for a fast softcore CPU and all the custom hardware I'd need for fast motor control on something smaller and cheaper like an Artix or Spartan, then I could save money on the part in question.
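At the data level, the CPU-to-FPGA handoff could be as simple as one shared register per setpoint pair. A hypothetical sketch, packing Id and Iq as two signed 16-bit fixed-point fields in a single 32-bit word (the 10 mA/LSB scaling and the layout are assumptions for illustration, not any real Zynq register map):

```python
import struct

LSB_PER_AMP = 100  # assumed fixed-point scaling: 10 mA per LSB (hypothetical)

def pack_idq(id_amps, iq_amps):
    """Pack (Id, Iq) into one 32-bit word the CPU could write to a shared register."""
    to_raw = lambda a: max(-32768, min(32767, round(a * LSB_PER_AMP)))
    return struct.unpack("<I", struct.pack("<hh", to_raw(id_amps), to_raw(iq_amps)))[0]

def unpack_idq(word):
    """Recover (Id, Iq) in amps from the packed word (what the fabric would see)."""
    id_raw, iq_raw = struct.unpack("<hh", struct.pack("<I", word))
    return id_raw / LSB_PER_AMP, iq_raw / LSB_PER_AMP

print(unpack_idq(pack_idq(-1.0, 2.5)))  # -> (-1.0, 2.5)
```

Fixed-point keeps the fabric side trivial: the FOC hardware just latches two raw int16 values and never touches floating point.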
 