Author Topic: What do EE's do for a PhD ??  (Read 2048 times)


Offline coppice

  • Super Contributor
  • ***
  • Posts: 9102
  • Country: gb
Re: What do EE's do for a PhD ??
« Reply #25 on: July 30, 2024, 02:31:59 pm »
Though the few PhD defenses I've recently attended are all of the form "we used a neural network and magically solved all our problems"...  ::)
... to which the next questions are "how can you prove the problem was completely solved?" followed by "where is the boundary where the problem was no longer completely solved?".

ML inherently has problems explaining why, and there are numerous examples where a tiny change caused failure.
Well, that's the same for human learning. Very few things actually get rigorously analyzed, whether humans or machines are involved. We rely heavily on some pretty woolly assumptions about the universality of things in many areas. In fact, there are many adaptive things where it's obvious they won't always converge properly, but they do so a high enough percentage of the time to be very useful.
 

Offline mawyatt

  • Super Contributor
  • ***
  • Posts: 3608
  • Country: us
Re: What do EE's do for a PhD ??
« Reply #26 on: July 30, 2024, 02:41:10 pm »


Some day I might go back to my local university or do some online EE courses, but no way do I want to do a full time course load, or get more than a bachelor's, any time soon. Plus it costs a fortune.

If you're paying for a PhD, you're doing it wrong. Either your employer should pay, or you should be getting paid as a research assistant.

One of our former colleagues went to Vanderbilt University in Nashville, TN for his PhD. The company in Largo, FL paid for everything: an apartment, travel, even his salary. This was in the early days of DSP for communications (modems), and the company became a world leader in the field thanks to his work. Of course, he had to agree to stay on for a number of years in return for the PhD benefit!!

As we stated earlier, "a PhD can be rewarding both intellectually and financially given the right topic, advisor, and university and, more importantly, the right person behind it".

Best,
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20144
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: What do EE's do for a PhD ??
« Reply #27 on: July 30, 2024, 02:58:18 pm »
Though the few PhD defenses I've recently attended are all of the form "we used a neural network and magically solved all our problems"...  ::)
... to which the next questions are "how can you prove the problem was completely solved?" followed by "where is the boundary where the problem was no longer completely solved?".

ML inherently has problems explaining why, and there are numerous examples where a tiny change caused failure.
Well, that's the same for human learning. Very few things actually get rigorously analyzed, whether humans or machines are involved. We rely heavily on some pretty woolly assumptions about the universality of things in many areas. In fact, there are many adaptive things where it's obvious they won't always converge properly, but they do so a high enough percentage of the time to be very useful.

For everyday autonomic actions etc., yes, that is how humans tend to behave. And all you have to do is look at people's behaviour and/or "reality TV" programmes to see how grossly deficient that is. The word "idiocracy" springs to mind.

Unfortunately ML neural nets are descendants of Igor Aleksander's WISARD. That distinguished well between cars and tanks in the lab, but failed dismally in the field. Eventually they realised it had trained itself to distinguish between cloudy and sunny days. It is said colleagues then refused to acknowledge Aleksander's presence on sunny days :)

There are documented examples where changing one pixel in a photo of a road "stop" sign caused the ML system to interpret it as a "40MPH" sign.
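The fragility is easy to reproduce in miniature. A toy Python sketch (an invented 5x5-pixel linear "classifier", nothing to do with any real sign reader) shows how an input sitting near the decision boundary, as many real inputs do, can be flipped by changing a single pixel:

Code: [Select]
import numpy as np

# A hypothetical "trained" linear classifier over a flattened 5x5 image.
rng = np.random.default_rng(0)
w = rng.normal(size=25)                  # invented weights
img = rng.uniform(0.0, 1.0, size=25)     # invented input image
b = 0.1 - img @ w                        # place this input near the boundary

def predict(x):
    return "STOP" if x @ w + b > 0 else "40MPH"

print("original:", predict(img))         # "STOP", with a margin of only 0.1

# Brute-force search for one pixel whose change flips the decision.
flipped = False
for i in range(25):
    for v in (0.0, 1.0):
        trial = img.copy()
        trial[i] = v
        if predict(trial) != "STOP":
            print(f"pixel {i} set to {v}: now reads {predict(trial)}")
            flipped = True
            break
    if flipped:
        break
if not flipped:
    print("no single-pixel flip found for this input")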

An ML radiography interpreter was found to be deciding whether or not a cancer was treatable based on the font used in the radiograph. From the training set it had correctly learned that one font was associated with a hospital in a poor area, where treatment outcomes were poorer.

ML in the (US) judicial system decides that black people shouldn't be released on bail, since that is the prejudice embodied in the training set.

Then there are the more subtle legal consequences - if a company using ML for employment decisions is accused of racial bias, they won't be able to disprove it.

FFI and references, see comp.risks; the 40 year archive is at http://catless.ncl.ac.uk/Risks/
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9102
  • Country: gb
Re: What do EE's do for a PhD ??
« Reply #28 on: July 30, 2024, 03:09:06 pm »
Unfortunately ML neural nets are descendants of Igor Aleksander's WISARD. That distinguished well between cars and tanks in the lab, but failed dismally in the field. Eventually they realised it had trained itself to distinguish between cloudy and sunny days. It is said colleagues then refused to acknowledge Aleksander's presence on sunny days :)
Quoting a famous incompetently engineered example doesn't really say anything useful. Those people were arrogant idiots. Imperial seems to produce more than its fair share of those.

There are documented examples where changing one pixel in a photo of a road "stop" sign caused the ML system to interpret it as a "40MPH" sign.
This is an issue with current cars. Sometimes a very dirty and unclear sign fools them, and you can understand it. Other times you can drive through an area of road works with numerous speed restriction signs, all nice and clean and clear, and the car misreads every one in the same odd way. As of this month, cars sold in the UK are required to have a road-sign reader nagging the driver. Some wanted that reader to actively enforce the speed restriction, which thankfully didn't get through.

An ML radiography interpreter was found to be deciding whether or not a cancer was treatable based on the font used in the radiograph. From the training set it had correctly learned that one font was associated with a hospital in a poor area, where treatment outcomes were poorer.

ML in the (US) judicial system decides that black people shouldn't be released on bail, since that is the prejudice embodied in the training set.

Then there are the more subtle legal consequences - if a company using ML for employment decisions is accused of racial bias, they won't be able to disprove it.

FFI and references, see comp.risks; the 40 year archive is at http://catless.ncl.ac.uk/Risks/
There are a lot of poor quality engineers out there. They'll produce garbage whatever tool set they have to work with. Don't blame a hammer when you hit your thumb.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20144
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: What do EE's do for a PhD ??
« Reply #29 on: July 30, 2024, 03:41:53 pm »
Unfortunately ML neural nets are descendants of Igor Aleksander's WISARD. That distinguished well between cars and tanks in the lab, but failed dismally in the field. Eventually they realised it had trained itself to distinguish between cloudy and sunny days. It is said colleagues then refused to acknowledge Aleksander's presence on sunny days :)
Quoting a famous incompetently engineered example doesn't really say anything useful. Those people were arrogant idiots. Imperial seems to produce more than its fair share of those.

There are documented examples where changing one pixel in a photo of a road "stop" sign caused the ML system to interpret it as a "40MPH" sign.
This is an issue with current cars. Sometimes a very dirty and unclear sign fools them, and you can understand it. Other times you can drive through an area of road works with numerous speed restriction signs, all nice and clean and clear, and the car misreads every one in the same odd way. As of this month, cars sold in the UK are required to have a road-sign reader nagging the driver. Some wanted that reader to actively enforce the speed restriction, which thankfully didn't get through.

An ML radiography interpreter was found to be deciding whether or not a cancer was treatable based on the font used in the radiograph. From the training set it had correctly learned that one font was associated with a hospital in a poor area, where treatment outcomes were poorer.

ML in the (US) judicial system decides that black people shouldn't be released on bail, since that is the prejudice embodied in the training set.

Then there are the more subtle legal consequences - if a company using ML for employment decisions is accused of racial bias, they won't be able to disprove it.

FFI and references, see comp.risks; the 40 year archive is at http://catless.ncl.ac.uk/Risks/
There are a lot of poor quality engineers out there. They'll produce garbage whatever tool set they have to work with. Don't blame a hammer when you hit your thumb.

These problems are inherent in ML systems.

With neural nets all you have is vast numbers of randomly interconnected multiplier accumulators and registers. You throw a training set at them, and they somehow assign interconnections and weighting factors. There is no reason to believe that if you present the training set in a different order, you will get the same interconnections and weighting factors.
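A toy Python/NumPy sketch (a made-up 3-input logistic model, purely illustrative) makes the order-dependence concrete: identical data and identical starting weights, with only the presentation order differing, end up at different weights:

Code: [Select]
import numpy as np

def train(X, y, order, lr=0.5, epochs=5):
    w = np.zeros(X.shape[1])                # identical starting point both times
    for _ in range(epochs):
        for i in order:                     # only the presentation order differs
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w += lr * (y[i] - p) * X[i]     # one SGD step per example
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # a perfectly learnable rule

order_a = np.arange(40)
order_b = rng.permutation(40)
wa = train(X, y, order_a)
wb = train(X, y, order_b)
print("weights A:", wa)
print("weights B:", wb)
print("identical?", np.allclose(wa, wb))    # virtually always False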

With a conventionally engineered system you can insert diagnostic mechanisms to indicate why the system is doing something. With ML systems you can dump the weighting factors and interconnections, and nothing else. That is equivalent to trying to understand human thinking by examining neurons firing.

The major problem with neural nets is that the ignorant are looking for quick fixes, and they believe the salesmen/advocates. Given that the ignorant don't understand engineering, it isn't surprising they can't tell the difference between "designed magic" and "found magic".

Even if you find an ML system produces the "right" output with the current set of tests you've thrown at it, there can be no way of telling how it will react to the next test. That's the antithesis of engineering.

I have no problems with ML being used to generate pictures of WarCraft characters or Doctor Who trailers: the result goes through a person's mind before acceptance/rejection. Not so with hire/fire, incarcerate/release, treat/leave, stop/go decisions :(
« Last Edit: July 30, 2024, 03:43:35 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9102
  • Country: gb
Re: What do EE's do for a PhD ??
« Reply #30 on: July 30, 2024, 04:05:10 pm »
These problems are inherent in ML systems.
These are problems with all learning. There can be a fine line between education and indoctrination.

With neural nets all you have is vast numbers of randomly interconnected multiplier accumulators and registers. You throw a training set at them, and they somehow assign interconnections and weighting factors. There is no reason to believe that if you present the training set in a different order, you will get the same interconnections and weighting factors.
Just like teaching a human.

With a conventionally engineered system you can insert diagnostic mechanisms to indicate why the system is doing something. With ML systems you can dump the weighting factors and interconnections, and nothing else. That is equivalent to trying to understand human thinking by examining neurons firing.
A fully deterministic system is more predictable, but we currently have to use humans for many tasks, because we need a level of flexibility those deterministic systems can't offer.

The major problem with neural nets is that the ignorant are looking for quick fixes, and they believe the salesmen/advocates. Given that the ignorant don't understand engineering, it isn't surprising they can't tell the difference between "designed magic" and "found magic".
Just like every new thing that comes along. Humans are serial technology abusers.

Even if you find an ML system produces the "right" output with the current set of tests you've thrown at it, there can be no way of telling how it will react to the next test. That's the antithesis of engineering.
If you teach a human to do something, you have little idea how well it went, and whether they will handle anything outside their learning in an acceptable manner. At least the ML system forgets a lot less.

I have no problems with ML being used to generate pictures of WarCraft characters or Doctor Who trailers: the result goes through a person's mind before acceptance/rejection.
Recently we seem to have seen numerous fiascos where there was no human WTF moment resulting in a cleanup before release.

Not so with hire/fire, incarcerate/release, treat/leave, stop/go decisions :(
Hire and fire is worse than an ML problem, although there is one. The human element is HR staff, whose talent-spotting ability usually seems to amount to rejecting it. All the best candidates can be found in their waste bin.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20144
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: What do EE's do for a PhD ??
« Reply #31 on: July 30, 2024, 04:27:44 pm »
These problems are inherent in ML systems.
These are problems with all learning. There can be a fine line between education and indoctrination.

With neural nets all you have is vast numbers of randomly interconnected multiplier accumulators and registers. You throw a training set at them, and they somehow assign interconnections and weighting factors. There is no reason to believe that if you present the training set in a different order, you will get the same interconnections and weighting factors.
Just like teaching a human.

With a conventionally engineered system you can insert diagnostic mechanisms to indicate why the system is doing something. With ML systems you can dump the weighting factors and interconnections, and nothing else. That is equivalent to trying to understand human thinking by examining neurons firing.
A fully deterministic system is more predictable, but we currently have to use humans for many tasks, because we need a level of flexibility those deterministic systems can't offer.

The major problem with neural nets is that the ignorant are looking for quick fixes, and they believe the salesmen/advocates. Given that the ignorant don't understand engineering, it isn't surprising they can't tell the difference between "designed magic" and "found magic".
Just like every new thing that comes along. Humans are serial technology abusers.

Even if you find an ML system produces the "right" output with the current set of tests you've thrown at it, there can be no way of telling how it will react to the next test. That's the antithesis of engineering.
If you teach a human to do something, you have little idea how well it went, and whether they will handle anything outside their learning in an acceptable manner. At least the ML system forgets a lot less.

You can ask a human subtle oblique questions to see how they reached a decision. You can't do that with an ML system.

ML systems do "forget". All it needs is:
  • you spot a problem in an ML's output
  • you apply more training examples in the hope they will reconfigure some of the pathways and weights
  • you cannot have any concept of how the new pathways/weights will change previously correct output. That's a real problem
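A toy sketch of that last point (two invented tasks on a 5-weight logistic model; illustrative only): fit on task A, keep training on task-B examples, and watch task-A accuracy degrade, with nothing telling you in advance which task-A answers break:

Code: [Select]
import numpy as np

def sgd(w, X, y, lr=0.3, epochs=20):
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-xi @ w))
            w += lr * (yi - p) * xi
    return w

def acc(w, X, y):
    return np.mean(((X @ w) > 0) == (y == 1))

rng = np.random.default_rng(2)
Xa = rng.normal(size=(100, 5)); ya = (Xa[:, 0] > 0).astype(float)  # task A
Xb = rng.normal(size=(100, 5)); yb = (Xb[:, 1] > 0).astype(float)  # task B, a different rule

w = sgd(np.zeros(5), Xa, ya)
print("task A accuracy after A-training:", acc(w, Xa, ya))

w = sgd(w, Xb, yb)   # "apply more training examples" for the new problem
print("task A accuracy after B-training:", acc(w, Xa, ya))  # typically drops sharply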

Quote
I have no problems with ML being used to generate pictures of WarCraft characters or Doctor Who trailers: the result goes through a person's mind before acceptance/rejection.
Recently we seem to have seen numerous fiascos where there was no human WTF moment resulting in a cleanup before release.

Not so with hire/fire, incarcerate/release, treat/leave, stop/go decisions :(
Hire and fire is worse than an ML problem, although there is one. The human element is HR staff, whose talent-spotting ability usually seems to amount to rejecting it. All the best candidates can be found in their waste bin.

HR droids want to offload the responsibility for their crap decisions onto a computer. They aren't alone in wanting to armour-plate their backs. So do some magistrates, judges, insurance companies, airlines (see the recent Air Canada debacle!).
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9102
  • Country: gb
Re: What do EE's do for a PhD ??
« Reply #32 on: July 30, 2024, 04:51:20 pm »
You can ask a human subtle oblique questions to see how they reached a decision. You can't do that with an ML system.
You can query a well-designed ML system, too. You can probe what led to its conclusions, and probably get more meaningful output than from the average pleb.

ML systems do "forget". All it needs is:
  • you spot a problem in an ML's output
  • you apply more training examples in the hope they will reconfigure some of the pathways and weights
  • you cannot have any concept of how the new pathways/weights will change previously correct output. That's a real problem
We don't usually classify an update in thinking based on new information as forgetting. You can't control a human's learning updates, but it's easy to lock down an ML solution if you want to.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20144
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: What do EE's do for a PhD ??
« Reply #33 on: July 30, 2024, 05:08:59 pm »
You can ask a human subtle oblique questions to see how they reached a decision. You can't do that with an ML system.
You can query a well-designed ML system, too. You can probe what led to its conclusions, and probably get more meaningful output than from the average pleb.

Reference, please. Or are you relying on "well designed" as the weasel words?

With people you can probe the mental model they have of the problem. I accept sometimes it will be just "mental", but that is in itself an adequate result! :)

You cannot even manage that with ML systems since they do not have an identifiable mental model per se; they just have neurons and weighting factors.

Quote
ML systems do "forget". All it needs is:
  • you spot a problem in an ML's output
  • you apply more training examples in the hope they will reconfigure some of the pathways and weights
  • you cannot have any concept of how the new pathways/weights will change previously correct output. That's a real problem
We don't usually classify an update in thinking based on new information as forgetting. You can't control a human's learning updates, but it's easy to lock down an ML solution if you want to.

In practice you can't lock down an ML system, because there will always be the requirement to remove newly-discovered edge cases. And there will always be newly-discovered edge cases.

Over-the-air updates are already a problem with driverless cars, because when you get in a car you can't rely on it behaving the same way that it did yesterday.

Have you - like most young software "engineers" - forgotten that "You can't test quality into a product"? When you put that to people creating ML systems based on training sets (i.e. all of them), first they pull a face, then they go "la-la-la-la-la".
« Last Edit: July 30, 2024, 05:10:34 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9102
  • Country: gb
Re: What do EE's do for a PhD ??
« Reply #34 on: July 30, 2024, 05:59:59 pm »
You can ask a human subtle oblique questions to see how they reached a decision. You can't do that with an ML system.
You can query a well-designed ML system, too. You can probe what led to its conclusions, and probably get more meaningful output than from the average pleb.

Reference, please. Or are you relying on "well designed" as the weasel words?
Most ML systems are split into a training system and a much simpler run-time system. The run-time systems are as basic as possible, but the systems on which the learning takes place generally have pretty flexible facilities for getting an explanation for a decision.
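One generic example of what such probing can look like (permutation importance, sketched here on an invented model rather than any particular product): scramble one input at a time and see how much the decisions degrade, which at least tells you which inputs a decision leans on:

Code: [Select]
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = (2 * X[:, 0] - X[:, 2] > 0).astype(float)       # only features 0 and 2 matter
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]    # stand-in "trained model"

def accuracy(Xe):
    return np.mean(((Xe @ w) > 0) == (y == 1))

base = accuracy(X)
for f in range(4):
    Xp = X.copy()
    Xp[:, f] = rng.permutation(Xp[:, f])            # destroy feature f's information
    print(f"feature {f}: accuracy drop {base - accuracy(Xp):+.3f}")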

With people you can probe the mental model they have of the problem. I accept sometimes it will be just "mental", but that is in itself an adequate result! :)

You cannot even manage that with ML systems since they do not have an identifiable mental model per se; they just have neurons and weighting factors.

Quote
ML systems do "forget". All it needs is:
  • you spot a problem in an ML's output
  • you apply more training examples in the hope they will reconfigure some of the pathways and weights
  • you cannot have any concept of how the new pathways/weights will change previously correct output. That's a real problem
We don't usually classify an update in thinking based on new information as forgetting. You can't control a human's learning updates, but it's easy to lock down an ML solution if you want to.

In practice you can't lock down an ML system, because there will always be the requirement to remove newly-discovered edge cases. And there will always be newly-discovered edge cases.
Most ML systems can't be trained, as they only perform the run time aspects of the problem. So, any updates come from the learning system, and the transfer of updates from there to the numerous run time systems can be as orderly or chaotic as you make it.
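In miniature, that split looks something like this (the file name and JSON format are invented for the sketch): the training side exports frozen, versioned weights, and the runtime side only loads and evaluates them, so deployed behaviour changes only when you deliberately push a new weight file:

Code: [Select]
import json
import numpy as np

# --- training side ----------------------------------------------------
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]          # stand-in for training
with open("model_v1.json", "w") as f:
    json.dump({"version": 1, "weights": w.tolist()}, f)   # orderly, versioned export

# --- runtime side (no training code at all) ---------------------------
with open("model_v1.json") as f:
    model = json.load(f)
wv = np.array(model["weights"])

def infer(x):                                             # the entire runtime
    return "accept" if x @ wv > 0 else "reject"

print("model version", model["version"], "->", infer(np.array([1.0, 0.0, 0.0])))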

Over-the-air updates are already a problem with driverless cars, because when you get in a car you can't rely on it behaving the same way that it did yesterday.
That is a management problem. It's a pretty dumb process that pushes updates randomly and surprises a driver in the middle of a journey with new behaviour, whether it's an ML behaviour or something altered in the car's UI.

Have you - like most young software "engineers" - forgotten that "You can't test quality into a product"? When you put that to people creating ML systems based on training sets (i.e. all of them), first they pull a face, then they go "la-la-la-la-la".
You can't test quality into a product, but you also can't build a complex product to be perfect. That always defeats human capabilities. Even the best-engineered systems do things that surprise their designers, and take years to fully shake out. There is plenty of denial about the vast amount of work it will take to get from, say, a driverless car that needs one or two human interventions per journey to one that might be no more dangerous left on its own than the average human. People find it hard to face the reality of just how difficult awkward cases are compared to more straightforward ones. They are the sort who can't accept just how hard it would be to properly automate the office cleaner's job, and who don't understand that the office cleaner is most likely to be laid off because everyone else in the office has been eliminated from their jobs.

 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20144
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: What do EE's do for a PhD ??
« Reply #35 on: July 30, 2024, 06:35:44 pm »
You can ask a human subtle oblique questions to see how they reached a decision. You can't do that with an ML system.
You can query a well-designed ML system, too. You can probe what led to its conclusions, and probably get more meaningful output than from the average pleb.

Reference, please. Or are you relying on "well designed" as the weasel words?
Most ML systems are split into a training system and a much simpler run-time system. The run-time systems are as basic as possible, but the systems on which the learning takes place generally have pretty flexible facilities for getting an explanation for a decision.

Please provide references about those "facilities for getting an explanation". Fundamentally it is a serious research topic, with no clear resolutions on the horizon. It will be a fruitful source of PhDs and research grants for decades.

Even where the rules were explicitly coded (i.e. 1980s old-skool AI), in practice it was difficult to determine why a decision was made, and then to modify the rules to make the desired change and no other. In modern ML systems there are no explicitly coded rules, so even that doesn't work.

I'm sure ML systems are structured that way: the training systems determine the weights and interconnections, and the deployed runtime executes them. Hence you can see that the distinction is irrelevant to the points I've been making.

The second reason it is irrelevant is that deployed systems don't necessarily retain all the information that caused them to make a decision, so their decision making process can't be "replayed" back in the lab.

Quote
With people you can probe the mental model they have of the problem. I accept sometimes it will be just "mental", but that is in itself an adequate result! :)

You cannot even manage that with ML systems since they do not have an identifiable mental model per se; they just have neurons and weighting factors.

Quote
ML systems do "forget". All it needs is:
  • you spot a problem in an ML's output
  • you apply more training examples in the hope they will reconfigure some of the pathways and weights
  • you cannot have any concept of how the new pathways/weights will change previously correct output. That's a real problem
We don't usually classify an update in thinking based on new information as forgetting. You can't control a human's learning updates, but it's easy to lock down an ML solution if you want to.

In practice you can't lock down an ML system, because there will always be the requirement to remove newly-discovered edge cases. And there will always be newly-discovered edge cases.
Most ML systems can't be trained, as they only perform the run time aspects of the problem. So, any updates come from the learning system, and the transfer of updates from there to the numerous run time systems can be as orderly or chaotic as you make it.

ML systems, in the modern meaning of the term, are always "trained by rote" on many, many individual examples. Old-skool AI systems were "taught by general rules".

Quote
Over-the-air updates are already a problem with driverless cars, because when you get in a car you can't rely on it behaving the same way that it did yesterday.
That is a management problem. It's a pretty dumb process that pushes updates randomly and surprises a driver in the middle of a journey with new behaviour, whether it's an ML behaviour or something altered in the car's UI.

No, it is a technical problem and a user problem.

Management ought to remove/avoid such problems, but in practice they are only too eager to turn a blind eye.

Quote
Have you - like most young software "engineers" - forgotten that "You can't test quality into a product"? When you put that to people creating ML systems based on training sets (i.e. all of them), first they pull a face, then they go "la-la-la-la-la".
You can't test quality into a product, but you also can't build a complex product to be perfect. That always defeats human capabilities. Even the best-engineered systems do things that surprise their designers, and take years to fully shake out. There is plenty of denial about the vast amount of work it will take to get from, say, a driverless car that needs one or two human interventions per journey to one that might be no more dangerous left on its own than the average human. People find it hard to face the reality of just how difficult awkward cases are compared to more straightforward ones. They are the sort who can't accept just how hard it would be to properly automate the office cleaner's job, and who don't understand that the office cleaner is most likely to be laid off because everyone else in the office has been eliminated from their jobs.

Quite right: the first 50% is easy, the last 20% extremely difficult. 50% is acceptable for WarCraft character generation, but is completely inadequate for important irreversible decisions (e.g. judicial imprisonment, medical diagnosis/treatment, autonomous vehicles, etc).

You are making my points for me. Thanks.

Do us all a favour and subscribe to comp.risks. It is a low-volume, high-quality curated information source - with a 40-year pedigree to prove it!
EDIT: the RSS feed is http://catless.ncl.ac.uk/risksrss2.xml but there is also a usenet feed, and I expect you can get it by email (about 2 per week).
« Last Edit: July 30, 2024, 06:55:14 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1976
  • Country: us
    • KE5FX.COM
Re: What do EE's do for a PhD ??
« Reply #36 on: July 30, 2024, 06:38:45 pm »
Though the few PhD defenses I've recently attended are all of the form "we used a neural network and magically solved all our problems"...  ::)
... to which the next questions are "how can you prove the problem was completely solved?" followed by "where is the boundary where the problem was no longer completely solved?".

ML inherently has problems explaining why, and there are numerous examples where a tiny change caused failure.

If I were a young Turk looking for a research field today, I'd probably look into ways to build robust systems from powerful but imperfect components whose flaws are not always discoverable a priori, and not always correctable when they are. 

Because that's how everything is going to work from now on, pretty much.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20144
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: What do EE's do for a PhD ??
« Reply #37 on: July 30, 2024, 06:49:46 pm »
Though the few PhD defenses I've recently attended are all of the form "we used a neural network and magically solved all our problems"...  ::)
... to which the next questions are "how can you prove the problem was completely solved?" followed by "where is the boundary where the problem was no longer completely solved?".

ML inherently has problems explaining why, and there are numerous examples where a tiny change caused failure.

If I were a young Turk looking for a research field today, I'd probably look into ways to build robust systems from powerful but imperfect components whose flaws are not always discoverable a priori, and not always correctable when they are. 

Because that's how everything is going to work from now on, pretty much.

It is an excellent field for fundamental research. Unfortunately, systems are being deployed without it :(

Fundamentally there have been no theoretical breakthroughs in the past 40 years (arguably 60).

For ML systems we haven't even reached the "peak of inflated expectations" yet, let alone the "trough of disillusionment". Hence it is impossible to know when - or even if - we might get through the "slope of enlightenment" to the "plateau of productivity".
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5376
  • Country: us
Re: What do EE's do for a PhD ??
« Reply #38 on: July 30, 2024, 07:02:31 pm »
The real way to find out what you do for a PhD nowadays is to read dissertations. Several schools publish the dissertations of their students, and others aren't that hard to obtain. In general they are not new theory, but applications of theory to specific problems. An example that is now a few decades old is the application of control theory (observability and controllability specifically) to economic systems. I would agree that there hasn't been much if any groundbreaking new theory on the level of Maxwell, Shannon or the like, but that doesn't indicate that everything is known. Lots of room left in non-linear systems, time-varying systems and the like. Engineering of the newer high-temp superconductor materials into real applications is another obvious area. Another example of an application that has taken off is the thermal bolometer focal plane array. No new theory, but forty years ago you could have found numerous experts in the infrared field who would have denied the possibility of such a critter. Taking that from idea to reality generated and/or utilized quite a few PhDs.
 

Offline mawyatt

  • Super Contributor
  • ***
  • Posts: 3608
  • Country: us
Re: What do EE's do for a PhD ??
« Reply #39 on: July 30, 2024, 08:12:25 pm »
Another example of application that has taken off is the thermal bolometer focal plane array.  No new theory, but forty years ago you could have found numerous experts in the infrared field that would have denied the possibility of such a critter.  Taking that from idea to reality generated and/or utilized quite a few PhDs.

Recall seeing Dr Wood's early uncooled bolometer arrays at work at Honeywell Corporate Technology Center in the 1980s, sensitive enough not only to spot a tank hiding in the trees & bushes, but also to see the thermal tracks it had left over the previous 1/2 hour getting there!!

Later we utilized cooled HgCdTe 8-12 micron detectors to passively map atmospheric chemical characteristics 5 km away for Remote Chemical Agent Detection (XM21). These detectors were sensitive enough to detect the heat of a human body at 1000 miles!!

Lots of early work in these fields and plenty of opportunities for young PhD students!!

Today, the most opportunity likely exists in the semiconductor field, as they seem to be changing the physics on a yearly basis with the continual node reductions; "You Can't Do That!!" seems to be their marching song :-+

Still can't fathom how quickly these latest semiconductor advancements end up in the general public's hands  ;D

Hats off to them, and hope they continue  :clap:

Best
Curiosity killed the cat, also depleted my wallet!
~Wyatt Labs by Mike~
 

