Nov. 24, 2021

028 - Easy entry into the world of AI in fire with MZ Naser


Have you ever been fascinated by the capabilities of AI? Did you wonder how the heck an algorithm can beat humans in repetitive tasks? Or make multi-level correlations that we would never be able to figure out? I was as well. And I felt the urge to learn more about this technology, so as not to be left out when everyone plays with their new toys... But at the same time, I felt overwhelmed and confused by this technology. What exactly is it? Where to start? Then comes the wall of choices — am I even after supervised or unsupervised learning? Is my problem a regression or a classification? I won't lie, it's hard already, and I have not even really started yet.

And then he arrives. Dressed in white (just kidding). MZ Naser.

MZ is not only a genius who seems to have figured it out in the world of fire — he is also documenting every step of his path in research papers. More than that, he has written a bunch of entry-level papers and a review paper summarizing the basics and explaining the core concepts. Wow, what a service to the community! Please join me in this discussion with MZ, where he literally walks me through the fascinating world of AI in fire and explains where to start.

At this point in the show notes, I would normally list a bunch of papers and relevant resources.
(Update: originally there was just a link to MZ's site, but now that this paper is published, you totally need to start with it: https://www.readcube.com/articles/10.1007/s10694-021-01210-1 )

MZ Naser is so nice that he has a website where all of this is summarized and kept updated! If I started listing the resources here, I would do you a disservice... You have to check out what he has out there.

https://www.mznaser.com/

And also, please connect with MZ on Twitter and LinkedIn.

Transcript

Hello, everybody. Welcome to the Fire Science Show, session 28. Today we'll be discussing one of my favourite topics in all of fire science, and that is the use of artificial intelligence. I consider it one of my favourites because it's something I would really, really love to learn myself, and I'm exploiting this podcast to bring me the best guests who can explain it to me a bit more. Honestly, I was quite confused about where to start with all of this, and after the discussion, as you will hear in this episode, I have a little bit better idea of where to start and how to start. And I actually should go on this journey, because I'm absolutely convinced it's worth it. Today with me I have one of the young leaders of fire safety. He's an author of literally countless papers on AI in fire science. I'm actually astonished by the amount and quality of work he's putting out and publishing; I really admire him for this. He's a professor at Clemson University. Let's just jump in and not prolong this, because you want to hear what's after the intro. Please welcome MZ Naser, and let's jump into the world of AI and fire science!

Hello everybody, welcome to the Fire Science Show. Today I'm here with Professor MZ Naser from Clemson University. Hey MZ, great to have you here. How's it going?

I'm fantastic.

I'm about to learn so much about AI today, and I'm really happy about that — I hope so, at least. I've invited you here because you are a rising star in our industry, and you're probably one of very few people who has a good clue about how AI works and how we can use it in engineering. There are actually not that many AI luminaries in our community, and your papers are very educative. They're not flashing out "look how advanced the algorithms I can use are" — you publish a lot of introductory-level AI papers and review papers, and I appreciate that so much. What put you on this pathway of using computers to enhance your learning?

This is a very good question, actually. The first time I learned about AI was sometime in 2012 or 2013. I was taking a transportation course, and in that course the professor was discussing how we can use AI to organize traffic — synchronize traffic lights, all different types of infrastructure. I did a very short paper on fire at the time, and then I kind of lost track of it, because once you go into your PhD, you focus on experimentation, simulation, those things, and you kind of forget about AI. Once I was done, I was trying to find a faculty job, and as you know, fire experiments are very, very expensive — you have to have a lab and equipment. You have a massive lab here; in my case, it was very hard to develop such a lab. So I had to do something, and I wanted to do something a little bit different from simulation. So I went back to my road to AI, and that's when things clicked back again. Between 2012–2013 and 2019, things had changed rapidly on the AI front. Many, many things had changed: we have different algorithms, different training systems, learning systems. So I had to relearn everything at that time, and hence some of my papers are just like what you mentioned — they are very much at that entry level, because as I was writing them, I was also learning. I decided to lay things out in a very smooth way, because this is how I learn.
So perhaps it will also be easier for somebody who is as familiar with AI as I was at that time to get going with those papers.

That's such a cool path. So you were basically documenting your own way through the world of AI. That's so cool, man. And that also confirms the theory that you just have to be one step ahead of others to be an expert — you don't have to know everything to provide useful guidance. I really appreciate that you are doing that. And I assume on your path to AI you stumbled upon the same confusion everyone stumbles into. For me it's like: what the hell is AI after all? Can you even define it? And then, what kind of AI should I go for? Because once you start digging, you enter this rabbit hole with hundreds of algorithms, models, approaches, and it's really confusing. How was it for you?

Yeah, a hundred percent. Four years ago, I didn't really know — I only knew neural networks. That's what I had been prepped on earlier. To me, this was my tool: I could solve anything with neural networks, because it was the one tool where you could put in the data points, it would run, and it should give you some kind of good performance, if not great performance, on different problems. But as you mentioned, nowadays we have all these different types of learning, all these families of algorithms. The easiest thing in my case was to admit: I need to learn. So I had to go back and see, okay, what are the basics of supervised learning? What is classification? What is regression? Once you go back to computer science and see those definitions, you see: you know what, most of the problems we do in fire engineering are really regression. We have a phenomenon, and the outcome of this phenomenon is a number — a fire resistance, a heating rate, a burning rate, some kind of a number. If your output is some kind of a number, there is a very good chance you are dealing with a supervised learning problem with a regression component. If your output is a category — for instance, this column fails or doesn't fail, a slab collapses or doesn't collapse, you have charring or you don't have charring, or this fire is, say, fuel-controlled or ventilation-controlled — then you are trying to put the phenomenon into one group, and this is classification.

So once you know the problem, you have to define the algorithm. You'd say, okay, my problem is, for instance, regression — what kind of algorithms are out there that can solve a regression problem? From there you'll find hundreds of algorithms that can do the same thing. So the question becomes: which one of these algorithms am I going to use? And the answer, to be honest, is that you could potentially use any single one of them, and if you have a good database, you will come up with a good answer, a good prediction. The problem becomes, as you might have guessed: why would I go with algorithm A instead of algorithm B or C or D? What is the motivation behind these algorithms? The answer is interesting, because it's exactly like asking: shall I use ANSYS or Abaqus to solve a problem? It basically comes down to which algorithm you're familiar with; it's tied to your own experience. In my case, I've always used ANSYS and used Abaqus very, very little, so if you go back to my papers, they're all ANSYS. It's the same thing with my algorithms.
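To make the regression-versus-classification fork concrete, here is a minimal sketch in Python with scikit-learn. The column features, the trend and every number below are hypothetical stand-ins invented for illustration, not taken from MZ's work:

```python
# Same hypothetical column dataset, framed once as regression (a number)
# and once as classification (a category).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: [width_mm, load_ratio, concrete_strength_MPa]
X = rng.uniform([200, 0.2, 25], [600, 0.8, 60], size=(100, 3))
# Hypothetical output: fire resistance in minutes (made-up trend + noise)
fire_resistance = 0.4 * X[:, 0] - 120 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 10, 100)

# Regression: the output is a number (fire resistance in minutes)
reg = RandomForestRegressor(random_state=0).fit(X, fire_resistance)

# Classification: the output is a category (does the column survive 60 minutes?)
survives_60_min = (fire_resistance >= 60).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, survives_60_min)

new_column = [[400, 0.5, 40]]
print(reg.predict(new_column))   # -> predicted minutes
print(clf.predict(new_column))   # -> 0 (fails) or 1 (survives)
```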
You'll see that the earlier work was heavily towards neural networks. More recently I've learned more about different algorithms, the modern ones, because, as you know, modern algorithms are almost superior when it comes to prediction power. To be honest, if you have a nice database and you run, let's say, ten algorithms, most likely nine out of the ten will give you an R squared of 95%, 90%, 85%. The science is really not in running the algorithm; the science is in what you learned from it. Let's say you use an algorithm and you get good performance — how does this actually advance our science, our knowledge?

So, to break the first wall for anyone jumping in: from your papers I've learned of supervised, unsupervised and semi-supervised methods, and this seemed like the very first critical choice one makes when they enter. Could you briefly showcase the differences and give some examples?

So supervised learning: the term "supervised" means you know the inputs and you know the output — everything is known. For instance, let's say we are trying to figure out if a column is going to fail under fire. We know the column's geometry, we know its material properties, we know its boundary conditions — fixed, pinned, all of these things. We've done a test, so we also know its fire resistance, or when it's going to fail. You know everything: you know the inputs, you know the output. So this is supervised.

Let's say now you know all the inputs — you have a group of columns and you know all their inputs — but you don't know when they fail. Then you would use unsupervised learning, and this way the algorithm should cluster or combine the columns that are similar to each other into groups. The algorithm would say: well, these five columns are group one, these four columns are group two, these four columns are group three. You don't know the output, you don't know why these are grouped, but if you go back and study the fire test results, you're likely to see that the columns in group one maybe failed within an hour, and group two maybe failed within two hours. So unsupervised learning is when you know the inputs but you don't know the output — you don't know what the phenomenon is, you're just trying to group things together.

Semi-supervised learning is somewhere in between. Say we have images of columns failing. Instead of us going image by image and saying "this column fails, this column doesn't fail", we could label only 50 images, and the algorithm should be able to label the additional 50 images that we didn't label. This way it has a little bit of knowledge of the inputs and outputs, but it doesn't have it for the whole database. So it's somewhere in between supervised and unsupervised learning.

So if you had, let's say, a supervised algorithm with a database of existing columns, and then you come up with a completely new column, the supervised one would tell you when it will fail, based on its knowledge. The unsupervised one would tell you which group of columns this one looks most similar to. And the semi-supervised one could just continue the task you were doing with the previous columns — whether that was painting them pink or measuring their moment of inertia or something. Okay, this seems useful. Nice.
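A minimal sketch of the unsupervised case from this exchange — grouping columns by their inputs alone — using scikit-learn's KMeans. The features and values are hypothetical:

```python
# We know the column inputs but not their fire resistance, so we can only
# group similar columns together and interpret the groups afterwards.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical inputs per column: [width_mm, load_ratio, cover_mm]
X = rng.uniform([200, 0.2, 20], [600, 0.8, 60], size=(13, 3))

# Scale features so width (hundreds of mm) doesn't dominate the distance metric
X_scaled = StandardScaler().fit_transform(X)

# Ask for three clusters, mirroring the "group one / two / three" example
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X_scaled)
for group in range(3):
    print(f"Group {group + 1}: columns {np.where(labels == group)[0].tolist()}")
# Going back to the fire tests afterwards, the columns in one group may turn
# out to share a similar failure time -- that interpretation is up to us.
```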
That reminds me of noise recognition — this is what such an algorithm would do: it would recognize noise, maybe my mic scratching against the hoodie, and it would label that as noise, as not-voice, because it has seen through training that this kind of scratching sound is not really a voice, so it takes it away.

Let's jump quickly from enthusiasm to the dangerous region, because you've mentioned "it has seen". If it has not seen something, it's very unlikely to predict the behaviour, right? If you show it a thousand fires with flashover, it will not know that a backdraft may happen, right?

And this is the problem. This is exactly the problem. When you develop an algorithm and you have a good database and good performance, the researcher needs to remember that this performance is only valid for your database. Going beyond the database is going to be very, very tricky, because when you have a database, you immediately constrain your algorithm. You have a space of inputs; you can't possibly collect everything, so you collect some features of that space. And for the algorithm, what it sees in those features is the whole space. If you get an additional feature outside of this space, it's going to be very hard for it to give you a correct prediction. Maybe it could sometimes, if the problem is simple enough, but other than that, it's going to be very tricky.

That's very similar to experience, actually. If you've experienced a lot of things, you're more likely to predict things. That's something we share with the machines, I guess.

Yeah, one thing about experience: when we have experience, usually — at least for us humans — we have a knowledge of what could happen. We can see beyond the data; algorithms can't, and that's the problem we're going to be dealing with. We can go beyond, see beyond, the data. However, they are very, very good at seeing between the lines, and we're not. So this is how we can complement each other: a complex database is going to be very hard for us to visualize, but for them it's easy. They can see things, and this is why they predict things with high accuracy — but that doesn't mean the prediction is actually something physically correct.

I already had an episode on AI and fire, with Xinyan Huang from Hong Kong Polytechnic University. Xinyan is doing a lot of crazy things with smoke control and fire detection in tunnels, and he also mentioned that this human–machine combination is the most powerful. In a way, he would like AI to be a way you could transfer the collective experience of the whole industry. For me that was such a powerful and beautiful idea — so much knowledge is lost between us, and if we could have this collective mind helping each other, it would be fun. But it also seems very difficult to achieve from the technical point of view, right? Because, like, to what extent is the structure of the database important? To what extent can you drop scattered data into an algorithm and expect correct results?

Okay, first of all, we're not computer scientists; we are appliers. Computer scientists develop the algorithms and validate them over multiple databases; we just take them, and then we do our own little experiments, we get good performance, and we think it works.
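Picking up the caveat above about not trusting a model beyond what it has seen, here is a crude, hypothetical guard one might wrap around any trained model — a simple box check on the training ranges (real input spaces can be far more complex than a box, so treat this as a sketch only):

```python
# Flag inputs that fall outside the ranges seen in training: predictions there
# are extrapolations and deserve suspicion. Names and numbers are hypothetical.
import numpy as np

X_train = np.array([[300, 0.3], [450, 0.5], [600, 0.7]])  # [width_mm, load_ratio]
lo, hi = X_train.min(axis=0), X_train.max(axis=0)

def in_training_domain(x, lo=lo, hi=hi):
    """True if every feature of x lies within the observed training range."""
    x = np.asarray(x, dtype=float)
    return bool(np.all((x >= lo) & (x <= hi)))

print(in_training_domain([500, 0.4]))  # True  -> interpolation, plausible
print(in_training_domain([900, 0.4]))  # False -> extrapolation, be careful
```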
The second part of the issue is that the machine learning we're using now — the algorithms we're using now — is highly data-driven, or correlations-driven, which in a way negates the purpose of science, because not everything that correlates has a cause and effect. This is why I myself am trying to move away from all this data-driven nonsense and go towards modern algorithms that can at least give you cause and effect. Because if you know the cause and effect, then regardless of how much data you have, you'll always get the right answer. The goal is to know why things happen. The issue is not "I have seen this in ten experiments, so this will happen in the eleventh experiment" — there is no guarantee. Observations help; however, to come up with knowledge, you need to know why — the cause and effect. And if you want to teach an algorithm cause and effect, then you have to completely move away from the type of learning we have now in commercial machine learning, which is purely data-driven.

Now, I'm not saying that correlation or data-driven approaches don't have a purpose. They do, and they work for many problems. If you want to apply knowledge, as opposed to advance knowledge, correlation or data-driven methods are going to be fine, because you're looking for a solution: you see this every day, you want a surrogate model that tells you "if you see this, this is likely to happen", and you know what to do. But if you want to know why things happen, you can't rely on that kind of AI. We have to combine AI with our experiments, and we need a completely different kind of teaching method for AI to figure out cause and effect.

In one of your papers, or in one of your talks, you've used a definition of AI as a computational technique that exploits hidden patterns between seemingly unrelated parameters to draw solutions to a given phenomenon. But often, when you see this data-driven AI, it just seems like really complex statistics — something you could never plot, an R squared drawn from multiple dimensions rather than a single plot, let's say. That statistical kind seems attractive, interesting, possibly very useful. But this exploitation of hidden patterns between seemingly unrelated parameters — that seems like something that could tell us why facades are burning, or why spalling occurs, or, I don't know, why in some conditions firefighters may die in a room. But to achieve this hidden-pattern recognition, you need knowledge beyond data, right? You need to have observations. Is that what you meant by coupling the experiment and AI?

You need to have a methodology that says: I have experiments, I've seen this, but this experiment is limited by whatever equipment and sensors I have. Sometimes I'm picking up data and I think it's noise — maybe it's not noise. So you have to do multiple levels of experimentation, use that data, and teach the algorithm at each level what each one means. Then the algorithm should be able to put together an overall picture of the hidden pathways between how these factors interact.
For instance, in many research papers now on AI — and not just in fire, in really any field of engineering — the second section of the paper is a description of the database. They list the database and give the min, max, average, median for each feature. And the second thing that's almost always there is a correlation matrix, where they say: this is the correlation between the features. But that correlation matrix is only going to capture linear relations, because you're using a linear correlation. There is no guarantee that the relationship between the features themselves, or between the features and the output, is linear. So having that matrix or that table doesn't really tell me anything about cause and effect. It tells me that I could use any statistical model to come up with an equation.

And the thing about machine learning versus statistics is this: in statistics, you have to be confined by the assumptions of the model, because a model with certain assumptions is applicable only to a certain set of data. Hence statistics can't be applied to many, many of our problems, because ours are highly nonlinear problems. The algorithms in machine learning are non-parametric, so they don't carry assumptions about the data — we don't assume a distribution. They can fit very complex functions within our database, so most likely they outperform statistics in our case. If our problems were linear, you wouldn't find anybody using machine learning — why would you use machine learning for such a simple problem?

I guess in 20 or 30 years, when this becomes mainstream, you will see people using machine learning for linear problems — just like today we use CFD for extremely simple cases instead of zone models. That's what I predict is going to happen. And — you didn't say that, but I assume it's also highly related to the amount of data you have, to be able to get this quality, these multi-level correlations. So when does one know they have enough data for the problem? And another question, something I would be very interested in when planning my experiments: how should I prepare to create a sufficient amount of data? Say I want a grant, and I need to know if I will need a thousand experiments, a hundred experiments, or ten experiments of this type and fifty additional of another type.

Yeah, it's a very good question, and there is a lot of research in computer science trying to figure out this answer. I know there are a few papers — I think five or six years ago the minimum number of observations somebody would need was maybe 10 or 12; now the number is 25. So if you have 25 observations, most of the time you should be able to build some kind of a model that performs in a nice way. For some types of learning, though — classification problems — I think the minimum number where you can be confident about your results is about a hundred observations. So these are the numbers we play with: somewhere between 25 and a hundred. Now, the problem becomes: as the database gets wider — when you have many, many features — you will have to have many more observations. Because if you think about it, a database is a matrix of rows and columns, and the wider the matrix is, the deeper it has to be to figure out some kind of correlation between the different parameters. So the wider it is, the more data you need.
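A tiny demonstration of the correlation-matrix point above: Pearson correlation only measures linear association, so a strong but nonlinear dependence can score near zero while a non-parametric measure still picks it up. The data below is synthetic, purely for illustration:

```python
# y depends strongly on x, just not linearly -- Pearson misses it,
# mutual information does not.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 500)
y = x**2 + rng.normal(0, 0.02, 500)

pearson = np.corrcoef(x, y)[0, 1]
mi = mutual_info_regression(x.reshape(-1, 1), y, random_state=2)[0]
print(f"Pearson r          = {pearson:.3f}")  # near zero despite the dependence
print(f"mutual information = {mi:.3f}")       # clearly positive
```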
I don't know the answer. If you have 50 data points, is that going to be enough? I really don't know, and I don't think we'll have an answer anytime soon. But if we are within 25 to a hundred points, with maybe four to seven features, the algorithms we have now should be able to give you something, maybe with some confidence. And to complement what I just mentioned: nowadays the algorithms we use can be augmented with different tools. For instance, you can add confidence intervals to your model. This way, even if you have a small database, the algorithm should tell you: okay, this is my prediction — I'm predicting this column to fail in 16 minutes — and this is going to be within a confidence of 90%, or 70%, or 60%. So even with a short database you get a prediction, but on the other hand you have some level of confidence in your model. It's not like the old days, two or three years ago, when we couldn't apply confidence intervals and all you had was a single number. We can now add confidence, and this gives us some level of trust, because even with a short database, or a very, very wide database, the prediction comes combined with some kind of confidence that says: my prediction is this much, with 70%.

In the past — maybe I wasn't working with them much, but I got a bit familiar with some design-of-experiments methods that allow you to figure out the number of experiments needed to identify, for example, the influence of variables on the outcome: Box–Behnken designs, there was a Latin hypercube something, Monte Carlo of course — many, many methods like that. I was very interested in them, because in my PhD I did a very ugly thing: I took like a hundred geometries, a few fires and some combinations of ventilation, and I just brute-forced them. It gave me a beautiful array of results to work with and complete my PhD, and I was very happy with it — I still am. I just feel like a caveman who bashed a wooden stick against a wall to get an answer, where I could have done it way more elegantly. So are there also, let's say, best practices to prepare and drive an experiment so it's useful for this?

There are a few things. For instance, we have some algorithms which look at the distribution of the observations or experiments you have so far, and they are able to zoom in and pinpoint regions within that distribution and say: okay, for this region of the distribution, we need more experimental data points. So, let's say I planned three experiments — I have to keep in mind that maybe I need to allocate two more experiments to this region, to cover a specific variable, for example. Because if you think about a model and how it would validate itself, it basically runs some kind of performance metric across different regions. What we normally do is collect all the predictions and run an R squared. Now, R squared doesn't tell you the performance at specific points; it tells you the performance over the whole database, over all the predictions. But if you plot X against Y, you'll see that at some regions the errors of your curve are much, much larger than at other regions. For those regions you might want to add experimental points, because that distance tells you two things: one, the algorithm was not able to capture the phenomenon in that region; and two, there is something there that we haven't seen before. So maybe if you do an experiment, you'll be able to confirm it or deny it, and figure out something new.
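One way to attach the kind of confidence MZ describes to a prediction is a prediction interval from quantile loss — shown below with scikit-learn's GradientBoostingRegressor as a stand-in for the confidence-interval tooling he refers to. The dataset and numbers are hypothetical:

```python
# Three quantile models bracket the prediction with a rough 80% interval.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.uniform(200, 600, size=(80, 1))        # e.g. column width in mm
y = 0.2 * X[:, 0] + rng.normal(0, 8, 80)       # e.g. failure time in minutes

models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=3).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}

x_new = [[400]]
lo, med, hi = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"Predicted failure time ~{med:.0f} min (80% interval: {lo:.0f}-{hi:.0f} min)")
```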
So this would guide you towards the potential outliers that could actually unravel new physics, or something completely unexpected. That's cool. Okay, let's move a bit more into engineering. I've taken a look at your papers, and you have used AI to identify fire-vulnerable bridges, design columns, determine spalling, identify failure of beams, assess FRP-strengthened reinforced concrete columns. Is there any field of engineering you have identified where it will not work at all, maybe?

It's a very good question, and this is what scares me most, to be honest. I'm a little bit lucky, because I got to play with AI a little earlier and see how it works, and so far it's working really well. Which tells me that maybe the problems we have are easy enough for AI — because if you really think about it, when computer scientists develop an algorithm, it works for insurance, for medicine, for space; it's not just bending, buckling, flammability, collapse. They have much, much more complex problems. If it works well for complex problems like finding new stars and galaxies — that's a very complex problem — then maybe our problems are not that complex after all for AI to solve. Maybe they're complex for the empirical methods we apply, or for the finite element methods we use; they're not really that hard, they're just computationally expensive. With FEA, or with FDS, you have to run them for a long time, you have to mesh the elements — but collectively, maybe they're not that hard.

The other issue I think about is that maybe, because the algorithm is really a black box and we don't know how it behaves — we only see the output — what if the output is correct, but the map that links the inputs to the output is not correct? Maybe the output is correct for this database, but once you go outside the range of the database, it's going to be very, very hard, or maybe you start to get errors. But the counterpoint is very interesting, because, let's say in structural fire engineering, we have certain sizes for columns and beams, and we don't go beyond those. So our databases are usually good, because you won't find a very, very thin column or a very, very short column — we don't use those. Going outside the norm will give you erroneous predictions, but at the same time, we never go there in practice. So there are pros and cons to every issue.

In the same way, you will only have combustion within the limits of flammability; you would have certain sizes of fire only at certain ventilation factors. So there are boundaries to the fires that we know empirically, and we can work with that. With all this innovation that you show in machine learning, I'm really wondering how hard it is in a field as, let's say, concrete as ours — construction. It's not a place of raging innovation; we're using hundred-year-old standards to quantify fire resistance, and that's not likely to change very soon. The problem is not innovating something; the problem is innovating something without breaking everything else, and we're very slow to adopt new technologies, new methods, new anything.
How is it going for you as a pioneer of this technology in construction? You must get funny reviews for your papers.

I would get very unique reviews, yes. I would say it's much, much better now. And a hundred percent — on being slow to adopt, I would agree. Two years ago, when I was trying to push for something at a conference or with a funding agency, the answer was a complete no, so I had to reconsider my whole path, because I couldn't get anything from anybody. Nowadays, I think the industry is interested. With the American Concrete Institute we had a few talks, and we published a book with them on AI, entirely focused on concrete — they were very, very open to it. The steel industry is looking into something similar, and the same with masonry: one of my grants is actually from the masonry industry — they want to use AI to design masonry structures for fire.

And the thing that's good for our case right now is the following: we have a lot of startups trying to automate many of the routine applications we use, and the way to automate a routine step is to use machine learning — you're not really going above and beyond; you have a procedure already, you're just trying to make it much faster, much more accurate, with less error, and all of that accumulates to less time and more money. So there is a push from the industry now, and I think it will grow within the next two or three years, because there are a lot of startups. And if you look at the people behind these startups, they're not really engineers; they're mainly into developing software and apps, with computer science backgrounds. They don't have the domain knowledge that we do, and this is why they hire civil engineers or structural engineers or fire engineers. Once they learn the problem, it's going to be very, very easy for them to develop solutions.

The issue, for me, is the following: we need solutions that come from somebody who has been practicing and educated in our field, to solve our problems. We can't just hand the domain knowledge to somebody who doesn't have our background, because they're looking at the surface. We need people who have done fire engineering or structural engineering from the beginning, from the undergraduate level. Then you come up with solutions that work best for our case and advance our knowledge as well. Because, you know, I'm not really looking for software that tells me "this is the amount of fire you're going to get" or "this is the heat intensity you're going to get" — anybody can build software like that. I want to know why. If I know why, I can redesign, I can change things, I can come up with unique designs, innovations we don't have right now. And a computer scientist can't give you that — you have to be an engineer for that.

I'll challenge that, because for a paradigm shift to occur in a field, it must often be done by someone from outside that field. If you graduated in fire safety engineering, it's very unlikely you will change fire safety engineering completely, because of the way you would have been taught. There is a certain experience factor in your head that will prevent you from touching stuff.
But you escaping the field, jumping into computer science and coming back is actually quite a nice way to carve such a path for something new. You've also mentioned black boxes many times, and that's how it seems to me — I don't understand it. I know I can put some stuff in and it will give me stuff out. Even CFD — I mean, CFD is very, very hard, but I can more or less understand CFD. I don't claim I understand it completely, but I more or less know what the equations do, what the schemes are, what a turbulence model is, what a boundary layer is. I know these things, and I can track back my simulation, identifying each of these steps along the way. And then I see a picture of a neural network, and it looks like a Christmas tree to me — it doesn't resemble any equation or anything. In your paper in Automation in Construction, the engineer's guide to AI, you've championed explainable AI as a necessity. So tell me, what would this explainable AI be, and why would it be something that would make me use AI, when I'm not using it today?

Yeah. So what I did is a very simple exercise: you take an equation from a code, you apply it to a database, and you see that the equation from the code — the one we have to use as engineers — does not perform as well as an algorithm. That fact by itself should make you pause: how can you now trust the code over the algorithm? Then the next question is: if the algorithm can predict better than the code, why do I have to use the code when I have a better method? And the question after that: why can the algorithm do what the code cannot?

Now, to know why an algorithm does a certain thing, we have to break it open and see how it does what it does. And right now we can't really do that, because even computer scientists can't fully track how the algorithms work — for them, the goal is to get as good a prediction as an experiment or an observation; they don't care how they get there. In our case, we do care, because we have to justify our decisions. How can I justify using a column with a two-hour fire rating in a building if I don't know why? If the algorithm says yes, I need to know why, so I need to figure out what's inside. This is where you have to use explainable AI: above AI, you have explainable AI.

The way it works is the following: the algorithm should be able to tell you exactly how it came up with its prediction. It has to break it down for you, so you can understand. Because numerically it may be correct, but physically, or from an engineering standpoint, maybe it's not. Maybe the algorithm says the relationship between material and geometry is linear, but we know from our experiments that it's not linear — so how can I trust its prediction if it negates what physics tells me? Now, the problem with explainable AI is that it will only explain its results based on the database you have. If you don't have a good database, or as many features as the physics would require, then even with explainable AI, the explanation won't be as good as the one we have in physics, because it won't be able to capture all the interactions we see in physics. So it's not really about choosing explainable AI over AI; it's about using a system that can tell you why.
When we use explainable AI, we basically have a small piece of code within our algorithm that can track a prediction back to its origin: how did the algorithm link parameter one with parameter two with parameter three with parameter four to come up with the prediction it did? A black box doesn't tell you that.

So, for example, if I employed AI to predict smoke movement in, let's say, a buoyant plume, it could in the meantime tell me that it works if you assume gravity is lower, like on Mars — while in fact it's just a matter of an entrainment coefficient somewhere else, which could accidentally be the same number as the ratio of gravity on Mars. The algorithm would never know what happened; it just used this and it worked, and for it, that's perfect. For engineering, you need to understand. So breaking it into steps and seeing, more or less, what has been done gives you this, let's say, higher power to unravel the hidden patterns, and you care less about advanced statistics.

Exactly. And the other thing is — at least in my eyes — if I know how the algorithm sees the problem, I might be able to figure out phenomena or sub-phenomena we haven't known before. And maybe that matters because our code methods are, by design, very conservative — because we have to be conservative. If we knew why, maybe we wouldn't have to be extremely conservative, and on top of that we would know something new that we didn't know before; we could figure out why things happen. When I think of AI, I always think of a tool that can give me an answer to why this happens, why I didn't know this before, what the new knowledge is for me. I'm not really looking for correlation. I mean, my earlier work was heavily data-driven correlation — I didn't know better — but nowadays, for some papers, of course, data-driven works, because the paper itself addresses a data-driven problem; but the overall idea should not always be data-driven. It should be much more than that. It should always be about advancing science: how can we advance our knowledge without having to spend hundreds and thousands of dollars?

And here's the thing: somebody does an experiment now, and years from now it's forgotten. Somebody goes back and repeats the experiment — they get a grant to redo the experiment, or to expand on it, or they don't. A paper gets published, and after a few months it's shelved away; it's in a database on ScienceDirect or Springer, it's online, and we rarely visit it. Why do we have to keep going through this cycle over and over again? If we accumulate our knowledge and are able to come up with something new, then we can go in different directions that we haven't seen before.

We started with classifying AI into supervised, unsupervised and semi-supervised, and you were talking about regression and classification and other formal ways to classify this, but I think the true first choice is: do you use it for discovery, or do you use it to calculate something? I think that's the first thing, because if you just want to figure out a number out of a very complex array of results — results you are unable to process any other way, because the correlations are multidimensional — then you're probably seeking a different path than when you employ this method to find the unexpected and discover something. As an engineer, I would like to have better numbers. I would not necessarily be happy discovering a completely new failure mode, because that means either I did something wrong, or we're kind of screwed as humanity if I do. But as a scientist, I would maybe care less about the numbers and care more about discovery.
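A minimal taste of peeking inside the black box: permutation importance scores each input by how much the model's accuracy drops when that input is shuffled. It is a much cruder tool than the explainability methods MZ works with, but it illustrates the idea; the data and feature names below are synthetic:

```python
# Only the first two features actually drive y; the model should reveal that
# the third one carries (almost) no importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Hypothetical features: [width_mm, load_ratio, cover_mm]
X = rng.uniform([200, 0.2, 20], [600, 0.8, 60], size=(300, 3))
y = 0.3 * X[:, 0] - 90 * X[:, 1] + rng.normal(0, 5, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)
model = RandomForestRegressor(random_state=4).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=4)
for name, imp in zip(["width_mm", "load_ratio", "cover_mm"], result.importances_mean):
    print(f"{name:>10}: {imp:.3f}")
# cover_mm should score near zero -- the model learned it doesn't drive y here.
```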
And coming back to your thought about collectively adding to that: to what extent can you use data from the past? To what extent can you take papers from the fifties, sixties, seventies — or, I don't know, from the last IAFSS — and use them to develop your own models? How big of an issue is that?

The thing is, because we're talking about fire, it's a very niche area, and a very expensive one. We don't really have a lot of experiments, or low-cost tests, that we can use — but we do have some. If you want to start with machine learning with the goal of coming up with, let's say, a black-box surrogate that tells you a failure mode or a failure time rather than doing a very lengthy calculation, you really have to use what you have, and those would be the old experiments. Now, the good thing is the following: those experiments are the same ones our codes are built on, so in a way we have some kind of similarity. On the opposite side, the materials are different — for instance, concrete 50 years ago is really different from the concrete we have now. So the experiments from 50 years ago, which is also what the codes are built on, are not your new experiments.

I'm happy you've added that. Codes are built on very, very old experiments — from the fifties, sixties, seventies.

So even for the code — why would you apply the code now, 60 years later? How does that carry over to what we have now? In a way, I know it may not be as comprehensive or as accurate as the knowledge we have now; however, this is the practice we're using. And if you really want to compare, let's say, a code procedure against machine learning, then to be fair, you have to use the data that developed that provision — which is the old data — apply it to the algorithm, and then make the comparison. You have to have a fair line of comparison; this is how it works. However, am I happy with using 50-year-old experiments? No, I'm not. But these are the ones we have, and this is the standard we have to use. Maybe in the future it will be different.

On the good side, on the other dimension of using old experiments: let's say you have two columns, one very, very old and one very, very new. The failure mode is not going to be something new — it would still fail in the same manner. However, the failure time is going to be different, because we have different chemicals, different stuff that we use in our materials now, different loading, different temperatures. But the fire we're subjecting this element to is the same; it's still the same compartment. So it's not that we're using something completely different — there are just some differences. And even if you want to do a statistical analysis, like a meta-analysis, you have to compare different data from different experiments. And this, again, is why purely data-driven analysis is a little bit itchy for me now, because I want to know why. At least in my mind, I want to know why this column fails — and whether it failed that way 50 years ago or now, the mechanism is not going to be some new physics.
It's going to be something that maybe we haven't seen before. But how can I get to something we haven't seen before using the same old methods we've been using for 50 years? By now, we would have figured it out. So maybe if we use a new method, we can see things a little bit differently, and maybe that little bit of difference opens up new experiments for us, or a new research area where we can apply new ideas.

That's interesting. For me personally — I'm in the world of smoke control — I really liked Xinyan's view of how CFD could lead to, let's say, more capable algorithms that would predict smoke behaviour in a compartment, giving you a number: the time for the layer to fall down, or for some tenability criterion to be breached. For example, one of my main areas of research is car parks — I engineer a lot of smoke control in car parks — and our limitation is usually that we take a car park and do two, three, four, five CFD simulations in it, for a certain size of fire. And I assume that if I did a sufficiently large number of simulations, and then received a new car park with a new architecture, where I've performed one, two, three simulations, the algorithm could technically take over and tell me what would happen in, like, a thousand different scenarios in that car park. Could you use it like this?

Yeah. For instance, what we're doing now: we have a database of, say, 200 columns. We can ask the algorithm to simulate data worth of testing 5,000 or 10,000 columns. This way I'm trying to capture as many interactions between the features, between the parameters, as I couldn't have captured using experiments alone. However, I still need that baseline that at least shows the algorithm the map — the distribution of the possibilities I could see before. Any problem you're going to use simulation for will be expensive — and not only expensive; you'll have to keep doing it over and over again. Once you're done with, say, your design, you throw it away — you clear your desk and throw it away. But if you do this on, let's say, an annual basis, and you design 50 structures, 50 cases, those simulations are very valuable information, because if you accumulate them over five, six, seven years, you'll have a very, very good database that you can teach an algorithm with. Maybe it figures out something we haven't seen before; maybe it comes up with some kind of faster approach to the same problem — to figure out at least what could be, and, what would be interesting to me, what would be a severe case for this parking structure, without having to solve it or run many minutes of CFD beforehand. I may be wrong, but do you know exactly, offhand, which case would be the severe one right now?

It is expert judgment that you use in the design. That's the thing — because if you could use this technique to expand the number of investigated cases, you could start talking about risk and probabilities: a fire of this probability gives these consequences with this confidence, and a fire of that probability gives those consequences at those intervals. And then — and this would be beautiful — you could ask the AI: please test every smoke exhaust capacity from this amount of CFM to this amount of CFM. And then it would tell you: okay, if you double the ventilation, you decrease your probabilities by this amount; and if you increase it sevenfold, it doesn't change much from the previous case.
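A minimal sketch of the surrogate idea discussed here: fit a model on the inputs and outputs of CFD runs you have already paid for, then sweep scenarios almost for free. Everything below — features, trend, numbers — is hypothetical; in practice X would come from hundreds of archived CFD runs:

```python
# Train a surrogate on past CFD results, then sweep exhaust capacity cheaply.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
# Hypothetical CFD inputs: [fire_size_MW, exhaust_m3_per_s, ceiling_height_m]
X = rng.uniform([1, 5, 2.5], [10, 60, 4.5], size=(200, 3))
# Hypothetical CFD output: time (s) until a tenability criterion is breached
y = 40 + 6 * X[:, 1] / X[:, 0] + 30 * (X[:, 2] - 2.5) + rng.normal(0, 10, 200)

surrogate = GradientBoostingRegressor(random_state=5).fit(X, y)

# Sweep exhaust capacity for a fixed 5 MW fire under a 3 m ceiling
for exhaust in (10, 20, 40, 70):   # 70 lies outside the training range -- beware
    t = surrogate.predict([[5.0, exhaust, 3.0]])[0]
    print(f"exhaust {exhaust:>3} m3/s -> ~{t:.0f} s to tenability breach")
```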
So you start to get a much more detailed outcome from your analysis than you would from investigating a few individual points, even if you are the best CFD engineer in the world — because it's not the tool that limits you, it's the capability of running multiple parallel cases. And solutions to that exist. There was a PhD student of Bart Merci, now Dr. Bart Van Weyenberge, who did his PhD on a response surface technique — a statistical technique where you can map certain inputs to certain outputs through multidimensional surfaces, and by running, let's say, 10 or 20 CFD simulations, you can predict the outcome of many more. But it still requires you to solve a certain geometry; here, with machine learning, maybe you could use results from different buildings to enhance your knowledge about this particular building. I mean, it's amazing — the response surface already seemed like magic, and this is magic plus. If that happens, it's going to be amazing, and I really wish it would. So if I wanted it to happen, what should I do now? Should I go learn coding? What's the first step? Let's assume I don't know anything about coding — I don't know Python, I don't know R, I don't know anything, but I just love this. What should I do with myself?

To be honest, I learned everything on YouTube. They have videos for every kind of thing — for everything — and you can learn from them. The good news is that, at this moment, we don't really develop algorithms. There are many, many codes out there: if you go to scikit-learn, the code is already there. You can just copy-paste it, add your data, run it, and then fine-tune a few parameters, and you should be good to go. It's not something that's complex. Once you start to go into explainability, confidence, trust, then you have to have a very good background in math or calculus, because at that point it's not just applying — it's more on the development side. And if you want to figure out causality, cause and effect — which is at least what I'm trying to do — then you need much more advanced coding.

So the bottom line — and this is what I do with my students — is that I'm not really expecting you to develop a new algorithm. If we can do that, that'd be great; however, the algorithms we have now can solve many, many problems, and all you really have to do is two things: one, understand how the algorithm works — its assumptions, its limitations; and two, know how to apply it. You don't need to code it by hand, because the codes are already available online; you can just copy-paste them from there, find your data, and apply it. And then you'll see. I mean, I did this experiment in two of my papers: I took five or six algorithms and applied them with default values — I just copied and pasted them onto our data — and it works. You get 95%, you get 90%, with very, very cheap resources, which tells me you can basically apply the same algorithms to different problems, and you're going to get very good results too — not all the time, but at least most of the time, because these algorithms are extremely powerful.
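And a minimal sketch of exactly that copy-paste workflow: a few scikit-learn algorithms at their default settings, cross-validated on the same dataset. The dataset is a synthetic stand-in — swap in your own table of experiments:

```python
# Several algorithms at default settings, scored on the same data with
# 5-fold cross-validated R^2.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(120, 4))
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.1, 120)

models = {
    "linear":            LinearRegression(),
    "k-nearest":         KNeighborsRegressor(),
    "random forest":     RandomForestRegressor(random_state=6),
    "gradient boosting": GradientBoostingRegressor(random_state=6),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>17}: mean R^2 = {r2:.2f}")
```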
That's a relief, in a way. I had Matt Bonner here as well, and he told me the same thing — that the algorithms already exist and you can apply them. Xinyan said the same. You're the third person to tell me this, so I must gather my courage and just try. I guess that's how I learned programming, actually: just keep trying and make as many mistakes as you can, and eventually it works out.

I'm teaching a new course next fall on machine learning. I'll send you a link to my lectures so you can attend.

Oh, really? That's so cool. I would appreciate that. For the end, as usual, let's point to resources — you have a webpage that's very rich in resources, so I will link to that. You had the paper in Fire Technology about the different types of machine learning that can be used in fire; you had this engineer's guide to AI in Automation in Construction, which had a very interesting case study and was a really nice paper. What else should I refer the audience to, to read up on this?

For fire, the mechanistic review paper is a very good one for a beginner. And I sent Professor Rein a very short letter — it's going to be published very, very soon; that would be a complement to that one. Once it is, I'll send you a link. The engineer's guide is one of my favourites — when I think about it, that paper was the highlight of 2021 for me.

Really? I really liked it. You can even tell that a lot of time was spent on the title — I figured it would be something very close to my heart.

There is a third paper; it would be the one on mapping functions. This is where we're trying to use more of a cause-and-effect kind of machine learning: how can we arrive at cause and effect without having to hassle with coding? We can actually figure out pathways between different algorithms and come up with a function — a mathematical expression that can convey some kind of formula to us. Because if you think about the output of a typical algorithm, it's just a number. We engineers like to see formulas; we're trained in formulas — this is the formula you apply to get an output. A machine just gives you a number, and hence the hesitation: we can't see why, we can't see how it came about. A mapping function is a way to translate the algorithmic logic from a black box into a function that we can see. And if you can see it, you can see the interaction between the parameters, and you'd feel much more comfortable applying a function than applying a complete black box where we don't know why it does what it does.

That's really, really good. And some external resources — maybe a YouTube channel or something you can recommend?

I'll send you these; I have them in my bookmarks.

Fantastic, I'll put them in the show notes, and I hope someone will find them useful. I really appreciate you sharing this knowledge. Okay, MZ, that was a great talk. I learned something about AI today, and maybe I'm one step closer to understanding how it can be applied in my field. And I guess there are many heads buzzing now with how this can be implemented in their fields.
Thank you for joining us in the Fire Science Show, and I hope you had a great time.

I had a lot of fun. Thank you very much — I appreciate you reaching out, and I appreciate your show. I always watch the episodes when you post them on Twitter. I like that you don't just do one thing — it's different components within the fire world, so it's much more informative this way.

Thank you so much. Cheers, man. Bye-bye.

And that's it. Well, what a discussion that was. Maybe I should just open some Python right now and start digging into that. I'm really excited about this area of fire science and the possibilities it brings. MZ has used AI in so many different aspects of fire engineering — literally, go to his webpage and check out his papers, the variety of topics where this method was used and found useful. It's just amazing how broad this technology is. Of course, there are caveats: you need to worry about the data quality, and you need to worry about what the algorithms have not seen — I hope you've picked up these things from our discussion. The technology is powerful, but only as powerful as the algorithm, as powerful as the data that fuels it, and, by far most importantly, as powerful as the person using it. So if you don't know what you're doing and you drop machine learning on it — well, you're going to have a machine-learned "no idea what you're doing". But if you know what you are doing, and you know what you're looking for, and it's just a hell of a complex problem to dig into — then machine learning and artificial intelligence may be your best future friends.

This talk is, I think, part of a mini-series in the podcast. If you remember, I had an episode with Xinyan Huang from Hong Kong Polytechnic University, with whom I discussed artificial intelligence and its potential use for smoke control and fire engineering at large — you definitely should check that episode out if you've missed it. And I had an episode with Matt Bonner, my friend from Imperial College London, who has used machine learning algorithms to investigate the database of facade fires that we built together. It was also quite interesting to see how well the artificial intelligence carried out a task that had taken us such a long time. So I'm really, really happy to have this in the podcast portfolio — I think these three episodes go together very well. If you haven't heard them, after this one you absolutely need to tune into Xinyan's episode and Matt's episode; I'm going to drop the links in the show notes. And that's it for today. I hope you've enjoyed it as much as I did. As usual, the next episode will be waiting for you here next Wednesday. Looking forward to that, and see you around. Thank you for listening.