
321: The Value Of Asking The Right Question The Right Way

In this episode of the Automotive Diagnostic Podcast, host Sean Tipping dives into the importance of asking the right questions when utilizing AI tools in automotive diagnostics. He emphasizes the nee...


Transcript

Speaker A Welcome to the Automotive Diagnostic Podcast. We're going to explore ways to sharpen our diagnostic skills, find learning resources and hear from experts in the automotive field. Hey, have you ever been faced with the challenge of sourcing, installing and programming a used control module in a vehicle? I know a lot of us have. It seems to be happening more and more often today with the volume of control modules on vehicles, the cost of some new ones, or even the availability of new control modules. In some cases, used may be the only option. So what do you do here? I strongly recommend checking out SJ Auto Solutions and Tommy Oliva. Tommy offers a cloning service for used control modules to make these things plug and play for the vehicle that you're working on. In a lot of cases, he is also able to source the control modules if you're unable to locate one for the vehicle that you're working on. But once you get connected with Tommy, he's going to offer fantastic support from start to finish to make sure that that control module is going to work in your application. He's also got tech support that he offers through his website, along with some free resources there as well with information about used control module programming. So make sure to check out SJ Auto Solutions. I can't recommend that enough. Hey, what's going on, automotive world? Welcome to another episode of the Automotive Diagnostic Podcast. My name is Sean Tipping and I'll be your host once again for this week's episode. Thank you so much for joining me. Just me on the show this week, and I'm gonna dive into round two of the discussion I was having last week based on using AI tools and LLMs within my business and in repairing and diagnosing vehicles in general. Now I had a couple good conversations with people over the last week based on the first episode I put out.
So I wanted to expand on a couple topics that I thought were pretty relevant, and maybe things I didn't touch on or go in depth enough on, but then also having, you know, those talks spur some, you know, different ideas that I wanted to get out there. So anyways, we'll jump into this today. Again, it's kind of gonna be an expansion on what we talked about last week, and we'll see where it goes from there. You know, I don't mean to overload on AI, there's a lot of other things to consider. But like I mentioned last week, it's such a powerful tool that I've been able to utilize, and that I'm learning as I go within my day to day job and the cars that I'm fixing, that it's hard not to talk about it. And you know, for everybody that listens to the show, right, we're all out there diagnosing vehicles, or at least for most people that listen to the show, you're doing that for at least some portion of your job. I don't see why you wouldn't start to utilize some of these tools to help you out and increase your throughput and your productivity. But anyways, let's get into the topics today of what I wanted to expand on. So one of the things that I want to make really clear, that is an important thing to think about as you use these LLMs to help you either understand a topic or just get information in general, is when we're putting a prompt in, we're asking it to either do something for us or provide us with some sort of information or confirmation of information that we're trying to figure out. And I'll give you a few examples here. But a lot of times you hop on ChatGPT and you ask it a question, right? How does this work? Where do I find this? Where do I go to execute this task? Right? Or can you execute this task? Can you take this bin file from this EEPROM that I just read and find me the mileage, okay, or VIN number or immobilizer data or whatever, right? And by the way, you can put bin files in there. And it's actually pretty cool to play around with it.
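Side note on those bin files: a "find me the VIN" pass doesn't even strictly need an LLM, because VINs have a fixed shape. Here's a minimal Python sketch of that idea; the dump bytes and the VIN in it are made up for illustration:

```python
import re

# VINs are 17 characters from A-Z and 0-9, excluding I, O and Q.
VIN_RE = re.compile(rb"[A-HJ-NPR-Z0-9]{17}")

def find_vin_candidates(dump: bytes) -> list[str]:
    """Return every 17-character VIN-shaped string found in a raw dump.

    Matches are only candidates: part numbers and serial numbers can look
    VIN-shaped too, so each hit still needs a human to verify it against
    the actual vehicle.
    """
    return [m.group().decode("ascii") for m in VIN_RE.finditer(dump)]

# Made-up example dump: padding bytes around a plausible GM-style VIN.
dump = b"\xff\xff\x00" + b"1GYFK63877R123456" + b"\x00\xff"
print(find_vin_candidates(dump))  # -> ['1GYFK63877R123456']
```

Mileage is trickier, since odometer storage formats vary by vendor, which is exactly where the human discernment he's talking about comes back in.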
But here's the thing, it's not always going to be right. And again, I mentioned that last week. It says it at the bottom of the prompt for the OpenAI models, and I think most of them say that: they'll give you wrong answers, right? And so a lot of the idea around this is you just put an input in and you get, you know, instant accurate expertise or feedback or information that should be 100% accurate in regards to whatever you're asking it to output. And that's just definitely not true. You might get right information, but you very well could get inaccurate information or incomplete information out of it. And I think most people that use it understand that. But I've just seen a lot of this: when people ask it a question and it gets it wrong, it's like, see, this thing is, you know, useless, there's no reason that you should be using this day to day because it's completely inaccurate. And it definitely can be, 100%. But the way that I think about this when I use it and I get an output from an LLM, and there's some stuff that goes into the prompting, I'll expand more on that as well, because the prompting matters and the context of information that it has to reference matters, but what I think about the output of the LLM as is just plausible information that requires some human discernment. Okay. The output that it gives could be the correct information, it could lead me in the right direction, but it really requires some human discernment to see: is this relevant and accurate to the situation, to the context that I'm utilizing it for?
Okay, now this is where it is really beneficial to already be an expert in the subject matter that you are asking it questions about, or to have a really in-depth understanding of what you're working with or what the expected outcome might be when you're asking it questions relevant to diagnosing a car, reading a bin file out of an EEPROM, looking at a picture of a circuit board and saying, hey, is that an EEPROM or not? And if you have no cursory knowledge, like you have nothing coming into it, you're a complete beginner, one of the problems with large language models and the outputs that they give is they sound very confident, right? The grammar and the spelling are perfect, they give you an abundance of text, they're very wordy unless you tell them to be otherwise, right? There's just so much information that's spit out instantly and you're like, wow, how could this be wrong? It's so confident. Even if you don't think about it, the product that's directly output to you, especially compared to, let's just say, your average human output that you might get on a Facebook post to a question, right? The comparison is completely different. And so you're like, wow, this really seems right. It just inspires confidence based on how it's presented to you in the output. Okay. So what I'm saying is if you are a complete beginner, you could be just baffled by BS, right? That's the old phrase that you'd hear in, like, sales settings. I remember this clearly. I worked at a Northern Tool for a period of time and they wanted me out on the floor selling, you know, water pumps and zero turn lawn mowers and stuff like that. And I was pretty young and I'm like, I don't know anything about half of this stuff. And I remember the salesman who was the top salesman there, he did very well for himself, he's like, I don't know most of these things either.
He's like, I just put on a show, and I've been doing this long enough that I can look at a product and I can just baffle them with some information. Now, is that right? No, but the fact of the matter is he did really well. He was the number one salesman at that store for a very long time for a reason: he was able to BS his way through it. Right. So what I'm saying is that the presentation of the output to someone can fool you into, you know, thinking that, wow, this person is very competent. Right. I think Fonzlow's talked about it several times, the competence and confidence correlation, right? If you are confident in your response or your answer, you're going to seem more competent, even if you aren't necessarily. And that's again the effect that you might be experiencing here. So if you have no, you know, background knowledge about that, if you don't know anything about that zero turn lawnmower, and that guy is just spouting off facts that are complete BS, but he does it with a ton of confidence, you're gonna believe him and say, yep, that sounds great, I am convinced. Right. And so that's what we get a lot with these outputs from LLMs. Now, does that mean that they're, you know, intentionally deceiving us? No, it's just that it's designed to give you an output. Now, with that being said, there are ways that you can instruct it to be more accurate. You can give it specific instructions within projects to say, you know, it's critical that the information you provide is accurate, or ask me questions if you need more context in order to get me the accurate information, or just tell me that, you know, you don't know, or that the information's not available, if you're not 100% sure. So you can dial this in in that regard. But I haven't necessarily done that except for very specific prompts that I'm asking about.
I would rather see what it gives me and then use my knowledge and expertise coming into the situation to assess whether that output is accurate, whether it's relevant and whether I need to question this more and look further into it. And being an expert, and of course that term is going to be subjective, but having a wealth of knowledge in a specific area, is going to be so beneficial to you using these tools. Right. It's not, it shouldn't be replacing your thinking, it should be amplifying your thinking. But in order to do that you have to have some of that knowledge coming in the door. And that makes these tools so much more useful, if you can call BS on that answer, right, or you know enough to say, well hey, wait a second, what about this? And then it will come back and it will do some more research, and maybe it can't find what you're looking for, right? And that's the other piece of this: it's only as good as the information that it has access to. But maybe you didn't ask the question correctly, or you didn't give it enough context. And here is where you can actually use the LLM to help you understand how to prompt correctly. And prompting is so, so important. I think they have entire jobs around, like, you know, prompt engineering, for people to be able to give it the right context and ask it the right questions, and you get answers that are worlds better than if you just gave it a simple question with no context. Let me give you an example here. This was one that we did the other day. It was a radio out of an 07 Cadillac Escalade, and TLC wouldn't do the VIN rewrite, so it shows up as locked. You know, usually you can just do an update to these radios and it'll change the software and then it'll write the VIN at the same time. This one didn't want to go through that way, and we tried.
So we're like, okay, let's just pull the radio out and we'll find the EEPROM, we'll swap it over and it'll be good to go, because that's where the VIN is stored in a lot of GM radios, and I had some information on a newer Escalade radio and where the EEPROM chip was. Turned out the innards of this one were a little bit different. And I actually had my intern working on this while I was gone. He's really into the board work, so we're kind of just letting him have at it. But he's still learning. And I didn't have any specific info on chip location on this specific board, but he finds one that, you know, very well could be an EEPROM based on the location of it and the fact that it's an eight leg chip that, you know, physically looks like the EEPROMs that we're used to. But the markings on it don't signify a typical EEPROM family, right? Like you'd see a 24, 25 or 93, something like that. It didn't have those markings, but it was close to the processor, right? So these are all things that we look at when we're looking on the board. Like, let's find the processor. Okay, let's look around it for any of these eight leg chips and then we'll see, is this an EEPROM? Because the odds are likely that it is. Now, radios can be a little different. I have found on some GM radios that the EEPROM location is not as close to the processor as you would expect. And quite often in GM radios, the number that's used on the EEPROM doesn't correlate to a typical EEPROM family. Okay? Now all that stuff I just said, that's previous experience that I have coming into this job. I've cloned a number of GM radios. I've found the EEPROMs with some help from other people, with trial and error, with googling the numbers on the chip and finding stuff in forums, to know that, okay, it might be near the processor, but it might not. And when we find it, it might have the normal EEPROM family markings that we're used to, or it might not.
That's all really important information that I need coming into this, and I'll show you why, like, where we're using this with ChatGPT. And try this: take a picture of a circuit board that you are working on, right? And try to get the markings clear, because it will be able to pick the text out of the picture once you put it in. And then you can ask it, like, hey, is this an EEPROM? Which is exactly the question I gave it when I put this into ChatGPT. Now, I'm talking about using good prompts here. The reason I didn't do a very good prompt, or didn't give it a whole lot of context or information, is I was out of town. I was trying, via text, to help my intern figure out if this is the EEPROM chip that we need to swap or not. So he sends me a picture of it. I pop it into ChatGPT and I just say, is this an EEPROM? Now it may be able to figure that out, especially if it has a marking on it like 24 series or 93 series or whatever. This one did not. The number on it was 28033204, and then there were a couple other markings underneath it: SN0653. Those aren't your typical, like, EEPROM family markings, but I put it in there, I said, is this an EEPROM? Okay. And ChatGPT says, looking at your photo, the 8 pin IC marked, and it gives me the text that's on there. So at least I could verify at that point that it can properly read the text that's printed on the top of the chip. It says it is not an EEPROM, and "not an EEPROM" is in bold. Right. So it's very confident that this is not an EEPROM. Says that package and numbering style belongs to a Texas Instruments regulator/driver IC family, given the SN prefix. It looks like a power management IC or a driver, not a memory device. And then it goes on to say a few points. EEPROMs usually have an identifier like 24C02 or 93C66 or 95020, often with an AT, ST or M prefix, which is 100% accurate. It's not giving me inaccurate information there. That's all true. Now this chip doesn't.
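That family-marking rule of thumb can be captured as a tiny heuristic. The sketch below is purely illustrative (the regex and function are mine, not anything from the episode's tools), and it makes the episode's point: a chip marked 28033204 / SN0653 fails the check and is still an EEPROM, so "no match" has to mean "inconclusive", not "no":

```python
import re

# Common serial EEPROM markings: 24C02, 93C66, 95020, 25LC256 and so on,
# often with an AT/ST/M vendor prefix (AT24C02, M93C66, ST95020...).
EEPROM_MARKING = re.compile(r"^(?:AT|ST|M)?(?:24|25|93|95)\w+$", re.IGNORECASE)

def marking_suggests_eeprom(marking: str) -> bool:
    """True if a chip marking matches a well-known serial EEPROM family.

    False means "no known family marking", NOT "not an EEPROM": custom-
    masked parts (common on GM radio boards) carry house numbers that
    match nothing here and can still be EEPROMs.
    """
    return bool(EEPROM_MARKING.match(marking.replace(" ", "").split("/")[0]))

print(marking_suggests_eeprom("AT24C02"))            # True
print(marking_suggests_eeprom("93C66"))              # True
print(marking_suggests_eeprom("28033204 / SN0653"))  # False, yet it was one
```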
But again, previous experience tells me that some of these GM radio EEPROMs have this type of mask on there, if you will. I don't know if this is technically considered a mask, but it's not the typical markings that you would see. It doesn't mean it's not an EEPROM. And it goes on to tell me more about the ways that they're typically identified and what the numbers mean. Right. Like a 16 would signify 16 kilobytes, or kilobits, sorry. So based on this, it says this is not an EEPROM. Then it goes on to say, take a bigger picture of the board and we'll see if we can identify either, you know, what chip might be the EEPROM or if there is one located on this board, and there were other spots on the board. But again, I'm actually thinking that this might be the EEPROM for this radio, and I'm not 100% convinced based off of this output that ChatGPT gave me. Now if I knew nothing about EEPROMs, if all I knew is I was looking for an 8 leg chip and that's about it, and I took a picture of this and I got this and I saw in bold, this is not an EEPROM, I'd be more likely convinced that that is correct, and I would move on and I would start looking at other things to try to find this. You know, if this LLM is my main resource for this information. And luckily it's not. Luckily, again, I have, you know, previous experience with these. And you can still do some Google searching here. Google search isn't dead. There's still a lot of really good information out on the Internet. And I was actually able to find a forum that listed this chip in reference to the model that I was working on, Cadillac Escalade, and the radio, and even said what line of the hex the VIN number was on. So you could go in and you could just edit the VIN number if you wanted to. You didn't necessarily have to swap the chip. I, in a lot of cases with these, just prefer to swap the entire chip rather than trying to edit things, just in the weird case that you run into some, you know, checksum error.
Not the case on GM radios that I've experienced, but you never know. I think it's just easier, especially when I'm having, you know, my new guy do it. Just swap the chip. I know that he's capable of that. Anyways, off in the weeds there. But, damn it, I look and I keep searching, and I find out that, yeah, this is an EEPROM. I actually just googled that top number and that's what got me to the forum. And that's, okay, pretty good confidence, when we find another source that says, yes, this exact number is an EEPROM. It told me the family. Okay, this is it. So we swapped it and it worked, and that unlocked the radio, so it's working in this Escalade. It didn't fix the whole problem, by the way, but that was not our diagnostic. So anyways, I went back to this prompt that I had and I said, hey, turns out this was an EEPROM. It's from an 07 Cadillac Escalade radio, which I didn't give it that context before. But I asked it, what information or what context or what way could I ask this question to you that would make it more likely that you would give me an accurate answer? Right? Because what it gave me was inaccurate. Now, it doesn't have reference to the information, potentially. Now, it was out there on the web, so could it find it? Maybe. But what I want to know is, directly from the horse's mouth, how can I ask better questions of you? And this is kind of like the secret sauce here: you can use the tool to figure out how to use the tool better. And I mean, there's stuff like that out there, right? You can read user's manuals for stuff, but this is just completely on a different level, where you can become so much better educated at the use of a tool by asking it, how do I use you better? What information could you have had to get a better outcome here? And it gives me a whole bunch of awesome information for prompting so that it can get the correct answer or get me closer to a correct answer.
It says it doesn't have universal, reliable access to all proprietary or remarked component databases and chip markings, and says that chip markings can be wildly inconsistent. Goes on to say, okay, well, how do you get accurate information next time we go through this? Here's how you stack the deck in your favor. It says, number one, provide as much functional context as possible. So it gives me an example prompt here of how I should have asked the question, because I just said, is this an EEPROM? "This chip is from a 2007 GM Delphi radio board. It's an 8 pin SOIC located near the microcontroller and crystal, likely a serial memory." Now, that might be leading it a little bit, but this is the example prompt it gave me. And then: "Marking is 28033204." It just has me read out the chip marking, which it was able to do. "Can you identify it or its most probable equivalent?" Okay. And it says this context anchors the model to EEPROM-type reasoning and increases accuracy significantly. Now, you can lead this a little too much. You could go in and say, hey, I know this is an EEPROM, will you tell me which one it is? And that's probably not the best way to do it either. You want to leave it open ended so that it can tell you if you're wrong. Right. But there's a lot more context there. I could have told it what this was out of, saying, hey, this is out of an 07 Cadillac Escalade radio, this is near the processor, it's on this board, I think this is the EEPROM. Some of these EEPROM chips that I've seen in GM radios are not marked with the traditional markings. Can you help me verify whether this is an EEPROM or not? That would be a way better way to ask this question. Okay. And so this is what I'm talking about here: asking the correct question is going to help immensely in getting a better answer, and so is learning how to do that. Right.
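That suggested prompt has a reusable shape: vehicle, board, package, location, marking, then an open-ended question. If you wanted to template it for the next chip, it might look like the sketch below; the function name and fields are assumptions of mine, not anything ChatGPT prescribes:

```python
def build_chip_prompt(vehicle: str, board: str, package: str,
                      location: str, marking: str, note: str = "") -> str:
    """Assemble a context-rich chip-identification prompt.

    Gives the model the functional context first, the marking second, and
    leaves the actual question open-ended so it is free to disagree with
    your hunch instead of just confirming it.
    """
    parts = [
        f"This chip is from {board} in a {vehicle}.",
        f"It is an {package} located {location}.",
        f"The marking reads: {marking}.",
    ]
    if note:
        parts.append(note)
    parts.append("Can you identify it or its most probable equivalent? "
                 "If you are not sure, say so instead of guessing.")
    return " ".join(parts)

print(build_chip_prompt(
    vehicle="2007 Cadillac Escalade",
    board="the factory radio board",
    package="8-pin SOIC",
    location="near the microcontroller and crystal",
    marking="28033204 / SN0653",
    note="Some GM radio EEPROMs I have seen do not carry the usual "
         "24/93/95 family markings.",
))
```

The closing line matters: asking the model to admit uncertainty is the same instruction he describes giving inside projects.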
So if it comes up with something where you're like, oh, this is a complete BS answer, and again, you have to have the wherewithal to know that that's the case, but if you run into that, then you just ask it: okay, well, here is what the answer was, once you actually figure it out, however you do that; how could I have prompted this better to get this outcome? And then you can learn as you go how to utilize this tool better. Okay, so there's a lot going on here. Again, that existing knowledge base coming into utilizing this tool is so critical, which means you still have to learn all this stuff yourself. You still have to be the expert if you want to use this in an expert setting and at an expert level. You've got to be the expert on the subject matter coming in so you can call BS, so that when that really confident person in front of you is spewing BS, you can say, nope, that is not correct. Right. And we have to do this in the real world, too, by the way. If you've ever watched a YouTube video or been to a class and you're like, okay, that guy's real confident, but I call BS on this, right? We've all experienced that. I know I have. But again, I've also been a beginner and been sitting in that same class, like, wow, this guy is just a freaking rocket scientist. I'll never be on his level. The stuff he's doing is so crazy complex. How does he even do it? Well, it turns out a lot of it was just, you know, showmanship. Right? It's again that confidence, that baffle-me-with-BS. Okay, so again, you have to have that level of understanding of the topic in order to make that call. It's the same thing here. So, you know, in my opinion, it's not a whole lot different between attending normal training, getting it from another human, watching a YouTube video, listening to a podcast, by the way, and using one of these LLMs.
Now, the better thing here is if I'm asking the LLM, how do I get a better answer out of you, versus asking the really confident instructor, like, hey, I called BS on that, how do I get a better answer out of you next time? I don't know the response that you're going to get from that person putting on that class. I don't. Try it out and let me know how it goes. I'd be curious. But the other piece of getting great outputs from this is the information that it has available to it. Now, these have been trained on the Internet and a whole lot of other things. Some of the new models are using synthetic data in order to train their models. I don't really understand exactly how that works, but I heard an interview, and, you know, one of the creators or owners of one of these companies was saying, basically, we ran out of data to train this on. We've trained it on everything that is out there. Now we've got to make synthetic data to train it further. That's wild to even conceive, but my thoughts on it at a very, very basic level is there's a lot of information behind paywalls, or there's even a lot of information that's just not documented, or a lot of information that just doesn't exist. Right. A specific, unique situation, right, those one off problems that you'll never see again, that's not documented anywhere because maybe nobody's ever experienced it before. Right. So there's a lack of information on the back side in one way or another. So number one is understanding that there is, you know, a lack of information in some cases, but then take the extra step to make sure that it does have access to the information that you're looking for. Now, I mentioned last week you could do a deep research or web mode when you're doing your searches, and that gives it, I think, more of a concentrated effort to find the information and the details that you're looking for, rather than the quick, you know, 10 second search that you normally get from the prompt window.
Now it takes, you know, five to ten minutes. So you're, you know, I don't know, 10xing the search time for a specific topic, but the output that you get, I think, is going to be significantly better, more detailed, thoroughly researched. So I didn't try it, but had I put that picture in there and done a deep research mode, I think it's more likely that it would have come back with the correct answer. It might have found that forum that I did when I Google searched the number on the EEPROM chip. It's been my experience using the deep research mode that it is more likely to find those little bits of information out there that don't come up within the first search. It's kind of like, if you want to think of it like this: the regular prompt to an LLM is going to be like looking at the first page of Google, where a deep research mode is going to look through a hundred pages of Google after that search and go through every single website. That's my best analogy for the difference between the two. But you only get so many of those deep research prompts per month. It depends on your pay plan and all that stuff. And it does take more time. So I'll kind of save them for something like, okay, I need this thing to dig on some info for me. Find me whatever you can on this particular topic. I had a code in a BMW the other day that I had never seen before. And it was sort of vague on what it meant, but it was in a limp mode for the engine. It would basically hit a rev limiter, and it was only a code in the DME, and it said energy saving mode, but I didn't quite understand what that meant at the time. And Alldata and Identifix, nothing on there, like nothing. It just told you that code doesn't exist. And turns out you had to turn off a transport mode. But that's where I found that information out, because I hadn't run into that before. I'm not typically a BMW technician. And the deep research mode is what ended up finding that for me.
So you can utilize that again to dig for some more information. But another thing is to give it the specific information that you would like it to reference. Now, this doesn't always work, because if you're searching, you're trying to figure something out, maybe you don't have access to said information. But maybe you do, right? Maybe you have a PDF of an owner's manual or user's manual for a particular software or product. Maybe you can get the description and operation, maybe you can get the DTC set criteria, right? Things like that you can feed into this and say, hey, reference this particular information. Or maybe you've built up your own library of information on some sort of internal drive or database, and there are ways you can link it up. And I'm just working with this right now, getting this figured out with the business model, and I mentioned that last week. Do be aware, if you're just using the regular public model, it'll train on your data, and it's possible that that information is going out for everybody to use. I don't quite understand the privacy, if any. I think I listened to Sam Altman talk in a podcast about how, you know, people put some really private stuff on here and maybe it's not necessarily so private. So yeah, do be aware of that. But at least with the business model, it says right at the bottom that it does not train other models on your data. So it's self contained, I hope. I'm trusting that that information's contained, because here's the deal: we have worked really hard, and maybe you have as well, to build up a bulk of data based on what we do, diagnostics, ADAS, keys, programming, and we have it broken down by manufacturer, by model line, by year, by type of problem, by type of module or ADAS system, with the notes right from the field, right? We had this code, here's the problem. We couldn't calibrate this radar. Here's why this key wouldn't program to this system.
Oh, there's two different FCC IDs. Here's how we figured out what the right one was. That sort of stuff, where maybe it's out there, but usually if we're writing it down in our drive, it means we weren't able to find this information elsewhere. Okay, so can I link that bulk of data to the LLM and now integrate those two things together? I've just recently been able to start doing that. Now, where I'm going with that is the information that it has access to is completely different than just going out onto Google. Right now, this is a ton of work. If you don't have a bulk of data stored up, like, you don't have that, you're not gonna be able to do this tomorrow. But maybe you can start tomorrow and start saving stuff and then, you know, work towards that. But I'll tell you what, it's so, so, so impressive what it can do with the correct information, right? And I've been really impressed with that piece of it. So: asking it the right question, prompting it the right way, giving it the correct information to reference in order to give you the best output possible, and then also just understanding that, hey, some of this can be BS, but you can call it on it. But you kind of have to be that expert going in in order to do that, to be suspicious of the answer, right? I mean, I guess you could be suspicious of it just in general and ask it more questions, like poke and prod it a little bit. Be like, are you sure? Tell me why you're sure. Give me a 100% confident answer in regards to this, and see what it says. Right? Again, that's the other thing. You're not going to bruise its ego by saying, hey, I think you're wrong. It'll just, you know, either agree with you or maybe it'll explain why it thinks it's right in this situation. And all of that should also be considered if you're going to use it for education, which I mentioned last week as well. You can use this to learn about specific topics or help you educate yourself in specific areas.
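On linking that library of field notes to an LLM: under the hood, this kind of integration is retrieval, i.e. pick the few notes relevant to the current job and hand them to the model as context. Vendors each wire this up their own way, so this is only a toy keyword-scoring version to show the shape of the idea; the note records are invented:

```python
def search_notes(notes: list[dict], query: str, limit: int = 3) -> list[dict]:
    """Rank shop notes by how many query terms appear in their text.

    A real setup would use embeddings or the vendor's file/connector
    features; plain keyword overlap is enough to show the retrieval step
    that happens before the LLM ever sees your data.
    """
    terms = set(query.lower().split())
    scored = []
    for note in notes:
        hits = sum(1 for t in terms if t in note["text"].lower())
        if hits:
            scored.append((hits, note))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [note for _, note in scored[:limit]]

# Invented examples of the kind of field notes described in the episode.
notes = [
    {"make": "GM",  "text": "Escalade radio: EEPROM near processor, non-standard marking"},
    {"make": "BMW", "text": "DME energy saving mode code: vehicle was in transport mode"},
]
print(search_notes(notes, "radio eeprom marking")[0]["make"])  # -> GM
```

The retrieved notes then get pasted (or piped) into the prompt as the "correct information to reference" he's describing.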
And you can use a back and forth, like with a voice mode. Again, you can just talk into this thing. It's 100% the best way to go. Just use the voice transcription. And it's amazing how quickly you can get information back and forth. And again, when I talk and I give it prompts, I can give it an annoying amount of detail, but that's better for the outcome. That increases the quality of the prompt and the quality of the outcome, the more context I give it. When I talk, I can get all kinds of stuff out there, just like I'm doing now. But when you're using this to help you educate yourself on a specific topic, you just want to be aware, again, that it's only as good as the information that it has access to. So here's an example of where you could potentially use this throughout your career. Right? Most of us have ASE certifications, right? And so we study for the test so we can go and pass the test. This is something where you can give it the exact information that it needs and say, hey, here's the composite vehicle, or here's, you know, the A1 outline for the engine repair test, right? And you can get those right from ASE; it tells you all the topics they cover. Or with the L1 test, right? Or the, what is the ADAS one? Is that L3, L4? One of those two. You can give it the composite vehicle from that and say, hey, you know, start quizzing me on the systems on this, you know, particular composite vehicle. Now you're giving it the specific information, and then you're asking it, hey, quiz me till I understand what's going on here. Do it in multiple choice format if you want to do it like the test, and then help me understand, if I get a question wrong, why I got it wrong. And it's going to be excellent, excellent at something like that. And you can use it as much as you want, right?
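That quiz-me setup is, again, just prompt assembly. Here's a rough sketch of turning a topic outline into the kind of instruction he describes; the function and the sample topics are illustrative, not the real ASE A1 outline:

```python
def build_quiz_prompt(cert: str, topics: list[str], num_questions: int = 10) -> str:
    """Build a 'quiz me' prompt from an exam outline.

    Pins the model to the supplied topic list, asks for the test's own
    multiple-choice format, and requires an explanation on every miss so
    wrong answers turn into study material.
    """
    topic_lines = "\n".join(f"- {t}" for t in topics)
    return (
        f"Quiz me for the ASE {cert} test. Use only these topics:\n"
        f"{topic_lines}\n"
        f"Ask {num_questions} multiple-choice questions, one at a time, "
        "in ASE style. When I answer wrong, explain why before moving on."
    )

print(build_quiz_prompt("A1 Engine Repair", [
    "Cylinder head and valve train diagnosis and repair",  # example topics,
    "Lubrication and cooling systems",                     # not the real outline
]))
```

Swapping in the actual outline text from ASE, or the L1/composite-vehicle reference, is what grounds the quiz in the material the test actually covers.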
If you're gonna do a practice test online, traditionally with these ASE tests, you're gonna be paying all kinds of money for, like, 10 questions that are probably 10 years old too. But something like that, where you can give it the exact info, I want to get better at this test, right, this specific topic, it's an excellent tool. If you're not feeding it, you know, specific, exact information, then, well, maybe you don't know what it's getting it from, and you should be a little bit more cautious on it. But it can be used as a learning tool as well if you are not super confident in a particular area going in. So that's where I'm going to end this one here today. I'm sure you're plenty tired of hearing me talk about AI, so we'll shift topics going into next week, but this is stuff that I wanted to expand on based on conversations that I had with people last week. So thank you so much for listening. Thank you for the input on the topic. I obviously find it really interesting. Hopefully you do too. Hopefully you learned something over the last couple weeks and the episodes, and maybe we'll have some more in the future based on uses that I find and input from you guys out there. So that's it for now. Just want to say thank you for listening again, and let's get out there and start fixing the world one car at a time. Sean.