June 7, 2023

104 - Experiments that will change fire science pt. 6 - MaCFP with Arnaud Trouve


What makes an experiment truly groundbreaking, and how can researchers plan and execute such experiments in fire science? Join us as we chat with Professor Arnaud Trouve from the University of Maryland, a co-chair of the MaCFP group at the IAFSS, to uncover the answers to these burning questions. Arnaud offers valuable insights into the creation of a structured, repeatable, and accessible database of knowledge, and how to design experiments that will revolutionize fire science.

We dive into the challenges of gathering data from manufacturers who don't share their information and the difficulties in modelling phenomena like underventilated fires, flame spread, radiation and soot. Arnaud also emphasizes the importance of well-controlled, well-instrumented experiments in fire research, and the need for computer power to solve fundamental problems in fire science. Moreover, we discuss the MaCFP Workshop and the three different solvers that make up a fire model, touching on the importance of IAFSS's endorsement of MaCFP and the resources available to access the discussions from past workshops.

The main MaCFP repository can be found here and the GitHub here.

Transcript

Speaker 1:

Hello everybody, welcome to the Fire Science Show. I don't know if I ever shared this with you, but my ambition at the ITB Institute, where I work, is to create a centre of excellence on experimental fire science. We would love to one day become the place where experimental fire science takes place. So episodes like this today I'm doing as much for you as I'm doing them for myself, because I simply love to expand my knowledge in terms of how to do good fire experiments. In the Fire Science Show we had a series of episodes related to the most impactful experiments ever conducted in the world of fire science, experiments that were the foundation of our models and understanding of fundamental fire physics. I still believe there are many more to cover, but for today I chose to take a twist on this series: invite Professor Arnaud Trouve from the University of Maryland, a co-chair of the MaCFP group at the IAFSS, and talk with Arnaud about what makes an experiment a great experiment, and how the MaCFP group, which is very focused on creating benchmark experiments in fire science, plans and executes experiments, what they look for in the experiments, and what all their work is about. This group is making a tremendous effort in building a structured, repeatable, accessible database of pristine knowledge, so that we have the best experiments and the most useful sets of benchmarks for our models, and this is the subject of the discussion today. So let's hear from Arnaud on how to make a great fire experiment. Let's spin the intro and jump into the episode. Welcome to the Fire Science Show. My name is Wojciech Węgrzyński and I will be your host, as usual. I would like to say thanks to the sponsor of this podcast, OFR Consultants. So this podcast is brought to you in collaboration with OFR Consultants, a multi-award-winning independent consultancy dedicated to addressing fire safety challenges. OFR is the UK's leading fire risk consultancy.
Its globally established team has developed a reputation for preeminent fire engineering expertise, with colleagues working across the world to help protect people, property and planet. In the UK, that includes the redevelopment of the Printworks building in Canada Water, one of the tallest residential buildings in Birmingham, as well as historical structures like the National Gallery, Natural History Museum and National Portrait Gallery in London. Internationally, the work ranges from the Antarctic to the Atacama Desert in Chile, to a number of projects in Africa. In 2023, OFR is growing its team, and it's keen to hear from industry professionals who want to collaborate on the fire safety futures this year. Get in touch at ofrconsultants.com. Hello everybody, welcome to the Fire Science Show. I am today joined by Professor Arnaud Trouve from the University of Maryland. Hello, Arnaud. Before we get to real business, first let me congratulate you on becoming the new chair of fire protection engineering at UMD. What a chair, what a legacy it holds. I am sure you will write a beautiful chapter of this story. So congratulations, Arnaud, this is fantastic.

Speaker 2:

Yes, thank you very much, votier. Of course I look forward to that challenge, but it will be a challenge. I have big shoes to fill succeeding Jim Mickey, as you know Well.

Speaker 1:

I'm absolutely sure that you are the person to do it, and I can only imagine what will come from UMD under your leadership. This looks like very exciting times. As you know, in the podcast I have this series called the experiments that changed fire science, in which I talk with many scientists about experimental work they've done in the past that really impacted how we do fire safety engineering today. But I thought I would bring you to this podcast to discuss how we can make the next experiment that will change fire science. So to start off with a tough question: what are the qualities of an outstanding fire experiment?

Speaker 2:

Yes, this is a good question. I mean, there are different ways of answering that question. In fact, the first thing I think a research community must do is agree on the fact that there are a few benchmark experiments that should basically capture the essence of the physics, of the phenomena, that you want to study and that are worth studying in great detail. We have that, for example, in building fires. I mean, we collectively agree that pool fires, flame spread along vertical surfaces, perhaps ceiling jets as well, that there are these benchmark experiments, these benchmark configurations, that represent most of the essential features of fire dynamics and should be studied in great detail. In contrast to that, for example, if you look at wildland fire, the wildland fire community doesn't have that kind of common agreement about benchmark experiments, and that's something that is missing. So I think the first thing that a research community must do is identify some fundamental experiments, some configurations that basically convey the essence of their problems, and that everybody will agree we should collectively study. Now, you study them numerically, and you study them experimentally. When you study them experimentally, to qualify as an experiment as opposed to a test, it must be well controlled and well instrumented. Well controlled means you need to pay attention to the initial conditions and the boundary conditions of your problem. They need to be characterized so that you understand what's going on. Examples of that: if you are doing a flow-driven experiment, you want to characterize the amount of flow coming in, and you want to characterize the details of the boundary layer profile if that's relevant to your experiment. You want to characterize the air entrainment process.
Very often in many experiments we are trying to turn a three-dimensional problem into a two-dimensional problem, and that usually means that we want to control the air entrainment at the edges of our system. So all of this has to be well done and well characterized. If you are looking at a wind tunnel experiment, you want to make sure that the smoke goes out from upstream to downstream, and that usually imposes some requirements on the amount of air going through your wind tunnel. Otherwise, you have an experiment that is not as well controlled, and you can have smoke basically recirculating to the inlet, so the inlet conditions are not as controlled as you would like. There are many examples of situations like this where you really need to pay careful attention to the setup of your problem. And then, well instrumented: you want to have diagnostics that can characterize your problem, and these diagnostics typically range from global diagnostics where, ideally, you would like to know something about the global heat release rate. So you would like to use a hood and do calorimetry: oxygen consumption calorimetry or carbon dioxide calorimetry comes to mind, to have the global heat release rate in your system. If you are dealing with liquid fuels or solid fuels, you would like to measure the mass loss rate, so that you have in your experiments, at least in an average sense, some understanding of the evolution of the mass loss rate. You would like to know the radiant fraction, the global radiant fraction of your system. So these are global features, and then you can add to that. Of course, you want to know something about the profiles of temperature and of species like CO2, H2O or soot. You would also like to know something about heat fluxes if you are doing an experiment where the radiant emissions are important.
The list goes on and on, of course, and each one of these items is going to require careful thinking, careful design, basically a design exercise, and you also have to calibrate your instruments.
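The global heat release rate measurement mentioned above rests on a simple relation: for most common fuels, the heat released is roughly proportional to the mass of oxygen consumed, with a nearly fuel-independent constant (Huggett's constant, about 13.1 MJ per kg of O2). A minimal sketch of that arithmetic, with an illustrative flow value and a hypothetical function name:

```python
# Oxygen consumption calorimetry in one line: heat release rate is roughly
# proportional to the rate of oxygen consumption. The 13.1 MJ/kg constant
# is Huggett's; it varies only a few percent across common fuels.
E_O2 = 13.1e3  # kJ released per kg of O2 consumed (Huggett's constant)

def hrr_from_o2(m_dot_o2_consumed: float) -> float:
    """Estimate heat release rate [kW] from the O2 consumption rate [kg/s]."""
    return E_O2 * m_dot_o2_consumed

# Illustrative example: a hood measures 10 g/s of oxygen consumed.
print(hrr_from_o2(0.010))  # 131.0 -> roughly a 131 kW fire
```

This is the back-of-the-envelope form; real oxygen consumption calorimetry corrects for humidity, incomplete combustion and CO/CO2 measurements.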

Speaker 1:

You said we need these fundamental benchmarks. I would qualify you as a modeler. How important is it for you to have benchmarks that you can rely on, versus just doing the necessary diagnostics yourself and, you know, validating your own model and living with it? How fundamental is this value of having a set of benchmarks for the whole discipline?

Speaker 2:

When we look at an engineering problem, whether it's a building fire or an outside fire, we tend to decompose it into simpler problems. This is called the building block approach. Because at the end of the day, when you are simulating an engineering problem with tools like FDS, you're going to occasionally find good agreement, occasionally find big discrepancies between your expectations and what is measured, and these systems are so complex that you don't know the reason why you get good agreement and the reason why you may not get good agreement. So to be able to rebound and make progress on your projects, you really need to decompose, to have an analytical approach: decompose the problem into simpler problems. And the way we do it in model validation is we look at the problem as maybe a flow problem, a combustion problem, a flame spread problem, a problem that may have compartment effects, and so forth. We basically decompose the physics into a series of simpler problems, and then we want to have validation experiments for each one of these simpler configurations that capture some element of the physics. Then we can go back to the engineering problem and put everything together with some level of confidence that the different pieces of the puzzle are actually pretty solid and robust and accurate. Unless we do that, we are in the dark. We don't know. We can certainly run FDS for any problem, but we don't know whether this is a trustworthy exercise, and we don't even know whether it's qualitatively accurate. I mean, it's hard enough to make CFD models quantitatively accurate, but at the very least we would like them to be qualitatively accurate. When we compare, for example, different design solutions for a sprinkler system, design A with design B with design C, we would like to make sure that the ranking is correct, that A may be superior to B, which may be superior to C.
So qualitative fidelity is important, and of course we would like to move from there to quantitative predictability, but that's an even higher standard that is typically harder to meet.

Speaker 1:

I would take it even further. In my own office, if I do my own validation experiments, I do some simulations that I trust. There is a competitor who's doing their CFD on their projects, and eventually we end up on the same project and the client gets two reports. How do they compare one to another? How do they know which is closer to the truth? How do they know which is more valid? The fact that I validated mine doesn't mean mine is ultimately true and the other report is wrong, or vice versa. It was a question with an obvious answer. It is fundamental for us to understand fire phenomena and to have the ability to speak the same language, pretty much.

Speaker 2:

Right. So you're bringing up an important point: in addition to having intermediate validation steps in this building block approach, where we can focus on some aspect of the physics that has been isolated and that we think is important to validate independently, you have also, through a benchmark experiment, decided that everyone is going to look at the same problem. That means that now everybody is going to compare their results simulating the same problem, and then you can make collective progress based on that.

Speaker 1:

Otherwise, you have people going in different directions, selecting different experiments, and as a community we don't make as fast progress because of this lack of coordination. Thankfully, today we are living in a world where this is to some extent coordinated, and this is through the IAFSS working group on Measurement and Computation of Fire Phenomena, which you are co-chairing with Bart Merci, the MaCFP group, as we call it for short. I have observed this from the side. I have not participated in MaCFP yet, but I guess after this podcast I will be convinced to join this fantastic group of researchers. I'm observing it from the side, and I see you guys working hard on some really detailed diagnostics and detailed descriptions of experiments. It looks very interesting, but for someone who has never dealt with MaCFP or never attended any of the MaCFP workshops, can you tell me what it is about, and maybe how it came to life?

Speaker 2:

Yeah, the IAFSS working group MaCFP, on Measurement and Computation of Fire Phenomena. The idea was to bring some coordination to a field where there was not much, and also a field that is pretty small, where coordination is very important because we have a limited number of assets on any one of these research topics, so it's important that we coordinate our efforts worldwide. The idea was to imitate what is done in other engineering science communities. My own experience came from the Combustion Institute, from the combustion community, which now has a lot of workshops. It started in the 1990s with a workshop that is very well known, called the Turbulent Nonpremixed Flame Workshop, initially organized by Sandia National Laboratories, by Bob Barlow at Sandia National Laboratories, in the 1990s. The idea was to bring together experimentalists and computational modelers around the topic of computations of turbulent combustion, and so you had laminar flame experts, experimentalists, computational modelers and turbulent flame experts coming together. This has been very successful, and there's been a lot of progress enabled and accelerated by this common forum. So the idea was to do the same. We had a presentation at the New Zealand IAFSS symposium in 2014 by Assaad Masri, who is one of the leaders of the Turbulent Nonpremixed Flame Workshop, the TNF workshop, in the combustion science community, and in that presentation Assaad was inviting us to basically imitate what is done in other communities and try to do the same. So we took that invitation, and we decided collectively to see whether we would have enough people interested in that effort, a coalition, and the response was immediately pretty positive. So we got together, we wrote a white paper, we engaged the IAFSS to provide some endorsement, and then we had our first workshop at the Lund symposium in 2017.
Initially, we worked our way very carefully, so we identified a few basic experiments that tend to be pool-fire-like experiments, emphasizing the gas phase phenomena first. There was a recognition that we should also target flame spread experiments, which combine gas phase phenomena and thermal degradation, solid phase phenomena, but we were not ready at the time. What was interesting initially is that the survey we did of what we consider well-controlled, well-instrumented experiments showed that we don't have so many of them, and that was the first lesson learned from this coalition: we had to work a little bit slowly, not only because academic people tend to be slow, but also because the data were not there. So we identified some basic experiments, and they are on the MaCFP repository already. But we didn't find much, especially when it comes to flame spread experiments, and it took us basically three symposia to now try to do some new experiments on flame spread, but also some new computations of flame spread.

Speaker 1:

So this will be at the Tsukuba symposium. So we're not even talking about compartment fires or ventilation during fires yet, not facades or stuff like that. We're talking about really the fundamental building blocks of fire behavior, which then are expanded, right?

Speaker 2:

Right. The ambition would be to go towards engineering applications and look at compartment fire effects. We've talked also about fire suppression effects, water-based fire suppression effects. But before we go there, we have to be ready and first check what we consider simpler problems, which are already challenging enough. Also, we want to make sure that we have good data, and what we mean by data is, again, well-controlled, well-instrumented experiments, and when you go closer to an engineering system, very often you lose some control of your experiments and you lose some quality in the diagnostics. We want to expand MaCFP towards these problems, but we are not fully ready. I think a good example of that is fire suppression, where to be able to look at this, we need some quality experiments on suppression by water mist systems or by water sprinkler systems. But typically that means two things: suppression experiments, and also some fundamental data characterizing the spray without any fire. And it's very hard to find good data in the open literature. Sprinkler manufacturers do have them, but they don't share them, and so we are in a situation where we don't have access to good data, and there are not many academic institutions doing these experiments today and producing this data for us. So yes, MaCFP tends to be on the fundamental side right now. We would like to make faster progress towards engineering configurations, but sometimes we are slowed down by the lack of data.

Speaker 1:

I find it refreshing, it's kind of a sanity check. You know, everywhere you see people claiming, oh yeah, I've modeled fire spread in a full-scale compartment that's a thousand square meters, or people claiming, oh yeah, I just simulated suppression of the compartment with FDS, it's so realistic, and stuff like that. While at the same time, the fire researchers are saying: no, we're not ready, we have to gather more data on the most fundamental aspects of this physics to really get this solved, get this up to the quality of a benchmark test, and move on. I find it reassuring, because I like to be in the cautious camp of fire science that is not rushing too far ahead.

Speaker 2:

Yes, well, we tend to view the problems, you know, in terms of pieces of the physics. So there are a lot of problems that we do very well today in tools like FDS or FireFOAM or similar CFD solvers. Luckily for us, I mean, there are many problems these tools handle very well. They do turbulent mixing very well. They do well-ventilated fires very well. The problem is when you go to certain aspects of the physics. If you have an under-ventilated fire, you're going to have extinction and re-ignition phenomena, and that's where the models are still not mature. If you are doing a flame spread problem, well, you have a flame that is typically in a boundary layer. If I look at a flame spreading along a vertical surface, boundary layer phenomena are tough. I mean, these flames are a few centimeters away from the solid surface. That means that you need a grid that is going to be millimeter-scale to capture that physics. So we know how to do it, and I hope we will confirm that in Tsukuba: we know how to do it if we bring a lot of computer power to the problem. But then we are doing a research project, not a practicing-CFD-engineer project. The practicing CFD engineer is going to work with computational grids at a centimeter scale or beyond. We need to provide a solution to this problem, and right now we don't have it. Similarly, if you look at the problem of radiation, and of soot, which plays a major role in radiation: soot is still an unsolved problem in the combustion science community, and therefore in the fire science community as well. So there are problems where soot plays an important role, and a role that is interesting. I think a lot of our models work well when you have enough soot that the smoke layer becomes optically thick. The difficulty is when you are in between the optically thin and optically thick regimes, and then things are very sensitive to your soot model, and that's where you lose accuracy.
I'm sharing these thoughts with you because I think the way we look at the problem is trying to identify the pieces of the physics. There are pieces of the physics we do very well, and there are pieces of the physics we still do not do very well, and that brings limitations in our ability to predict some of the engineering problems that we are interested in. Sometimes the engineering problem does not require that knowledge, and fine, we can apply the existing tools with some confidence. But when you do need that piece of the physics, if you have a problem where vertical flame spread plays a key role, you'd better be careful.

Speaker 1:

People would say that these models don't have to be fully accurate to be useful. I just have an issue when people go further, to: these models don't even have to be correct to be useful. That's where I put my red line that I would like not to cross. You mentioned radiation. Radiation is the newest member of the MaCFP workshop family. It started with gas phase and condensed phase phenomena; now you have added radiative heat transfer phenomena. For an engineer who's not a fire scientist, can you give some relatable examples of what gas phase phenomena, condensed phase phenomena and radiant heat transfer phenomena would mean to an engineer, and what are the uses of these models eventually in engineering that they would benefit from?

Speaker 2:

Yes, the first thing I want to say on this is that when we talk about a fire model, and I'm the first guilty of making this assumption, we often think of a fire model as being a computational fluid dynamics model, a CFD model. However, a fire model is actually the coupling between three different types of solvers. You have a CFD solver that is going to describe the flow, the combustion and the convective heat transfer processes in the gas phase. In addition to this, you have a solid phase solver that is going to handle, for example, heat conduction in the walls and the amount of heat lost from the gas phase to the surrounding walls in a compartment fire situation. Very often our fuel sources are solid, so it's also going to calculate the thermal degradation of the material that provides fuel to the combustion process. The solid phase solver is typically simple in our fire models, it's treated locally in 1D, but it still plays a very important role, and it's a separate problem. Then you have a third solver, which is radiation. The transport of heat by radiation corresponds to different physics: electromagnetic waves, wave energy propagating at the speed of light. It's a very different governing equation and piece of the physics. That solver also has different numerical requirements. The way I look at it, the fire model is the combination of these three solvers. Each one of these three solvers has different physical models and different numerical requirements: spatial resolution for the gas phase solver, spatial resolution for the solid phase solver, and angular resolution for the radiation solver. All of these require different types of algorithms, different types of tests, different levels of expertise. This is why we have these three groups right now.
We could expand on these three groups, but these three groups correspond to the fact that a fire model is made up of three different solvers, and each one of these groups represents one solver. Of course, at the end of the day, everything is coupled in the simulation of a fire problem but, again consistent with the building block approach, we think that we have to first validate things independently before we couple everything together.

Speaker 1:

Now for the useful part: how does MaCFP communicate with the rest of the community? Where do we find your outputs? I know there are the workshops, I know there's a GitHub repository. Tell me, how did you come up with the GitHub? If I had to bet my money, I would say Randy.

Speaker 2:

Well, yes, Randy McDermott from NIST is playing a key role here. He has been our leader in terms of the development of the GitHub repository, and also in developing, initially, MATLAB scripts and then Python scripts to do automatic comparisons between experimental data and computational results. We also have the assistance now of Isaac Leventon from NIST, who is doing similar work on the condensed phase subgroup side. But to go back to your question, we felt that this is a community effort, and to be successful it has to work at the level of a neutral space. So it shouldn't be led by any one of our institutions; it should be led by a scientific society. We went to the IAFSS as a natural choice, and the IAFSS is endorsing us. So the first place where you find information on MaCFP is the IAFSS website, and there you have a description of what MaCFP is about, even the history of MaCFP, and then you have links to the database that is on the GitHub repository. There you can find the selection of target experiments, information on these experiments and the experimental data coming out of them, as well as the computational results that were presented at the past two workshops. So the entrance to the MaCFP information is on the IAFSS website. And then we've advertised everything every time. We're going to have our workshop number three in Tsukuba in October 2023, and everything has been advertised through the Fire Safety Journal, the Fire Technology journal as well, the IAFSS newsletter and emails.
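In the spirit of those automatic comparison scripts, the core of an experiment-versus-simulation comparison can be sketched in a few lines. The data, names and metric choice below are illustrative, not taken from the actual MaCFP scripts:

```python
# Minimal sketch of an automatic experiment-vs-model comparison:
# a normalized RMS error between matched experimental and simulated series.
# All data values here are invented for illustration.
import math

def nrmse(measured, predicted):
    """Root-mean-square error normalized by the measured data range."""
    n = len(measured)
    mse = sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n
    return math.sqrt(mse) / (max(measured) - min(measured))

# e.g. temperatures [K] at matched probe locations above a pool fire
exp_data = [520.0, 610.0, 690.0, 640.0, 560.0]
sim_data = [500.0, 630.0, 700.0, 620.0, 540.0]
print(f"NRMSE = {nrmse(exp_data, sim_data):.3f}")  # NRMSE = 0.108
```

Scripted metrics like this are what make the comparisons repeatable across workshops: every submitted dataset is scored the same way, rather than judged by eye.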

Speaker 1:

I will link those resources, the GitHub repository and the MaCFP website, and there are also very nice direct links to the previous workshops, where you can find the presentations and the recorded talks, so it is truly a repository of the history of this. I guess that was the point: to turn a workshop, a nice place where you meet people, exchange information and go home, into something that is protected and shared, so that ten years later we can access the discussions from Lund or from whatever other workshop was held. Are they also now held between the IAFSS conferences in some online space?

Speaker 2:

Basically, we did have a couple of Zoom meetings to get people ready for the workshop, but the workshops take place with the IAFSS symposium, so we have a three-year cycle. It's a lot of work that goes into that, so I'm not sure we can do more frequent meetings, but this is something we're considering. I want to go back to something you said. My view of MaCFP now is a series of workshops plus more. First, a forum where experts are going to exchange information, where newcomers can find the relevant information on that particular topic of model validation for fire applications. So it's a forum, but it's more than that, because traditional workshops are great, but to me MaCFP is a series of workshops plus a record: the fact that we have a place where we're going to leave a trace of what was discussed, and we're going to leave data. So the GitHub repository is very important, and that's what we wanted. I like the traditional workshops that we have at the IAFSS, and they've been very successful; they provide very nice discussions. But to me, for a workshop to have long-lasting value, we need not only these discussions taking place, but also to leave a record, a trace of what has been discussed, and to leave data and software behind so people can build on it.

Speaker 1:

I wondered: is there any competition within MaCFP itself? Like, okay, you are a bunch of scientists, and I know science is a competitive space. Come on, I'm a scientist myself. And you tend to get inside the room and talk about fundamental phenomena. How does it look inside?

Speaker 2:

So I think no, there's not much competition. We were very careful about trying to avoid sending messages to the community where there would be friction, where people are going to claim that my model is better than yours, and things like this. We want to stay away from that. We want to collectively identify progress, and also identify knowledge gaps and difficulties, and just candidly recognize them; that's how you make progress. I think for this we benefited from the series of FM fire modeling workshops organized by FM Global. Pre-pandemic, they were organized annually, and they just started again in 2023. That's a forum where, again, there are some similarities in spirit to MaCFP. It's a forum that brings together modelers and also some experimentalists to talk about the state of the art in terms of modeling, and most people in the room are working with FDS or with FireFOAM. There are also people using other tools, but most of them are using FDS or FireFOAM. Initially, when the FM Global workshop series was started, we were a little bit concerned about possible competition between the FDS developers and the FM Global developers, but in the end we've stayed away from this, and people have been bringing discussion at a level that is very neutral. I think also, again going back to the idea of putting MaCFP under the umbrella of the IAFSS, that was a good move, just because this is a neutral space for people to exchange ideas and knowledge. So, not surprisingly, we've done pretty well at having a very collegial tone and a very candid tone of recognizing where the difficulties are, and I think this is important. We don't want to hide, we don't want to, you know, sweep the problems under the carpet and ignore them.

Speaker 1:

We want to bring them forward. You are creating a space where people come in with, let's say, the best experiments they could do. I would go to MaCFP with my best experiment, and a possible outcome is that I will get criticized: you did not measure this, or you did not do something that's necessary. In the discussion, you can end up with: okay, this experiment is worthless for the purpose, it's not a benchmark experiment. But it's inevitable. It's inevitable that this will happen, and that's the whole point of the thing: to work out experiments that are the benchmarks, so that if I take a document from the GitHub repository, I can be sure that this experiment is the best quality experiment on a certain piece of physics that we have at the moment. So it must be a very nice and, how to say it, safe space for researchers to discuss these things, because many people would be ashamed or worried about showcasing experiments that will be discussed further and that may need improvement. I find this a challenge, actually.

Speaker 2:

Yeah, well, it's nice to have this space where people agree, for example, on what should be characterized in these experiments, even though we recognize that typically the databases that are provided are going to give only a portion of the information that ideally you would like to have. I mean, the discussion between computational modelers and experimentalists is always very fruitful in that regard, because as a computational modeler I can always shoot for the stars and say I would like to have an idealized experiment where the fuel mass loss rate is measured, the heat release rate is measured, heat fluxes are measured, and I have access to information on temperature, velocity, soot volume fraction, and carbon dioxide and water vapor mole fractions at the very least. I mean, to me this is the way you characterize the flame structure. And typically, when you go to experiments, you're going to have some of these measurements, but not all. So to me the set is always incomplete, but that's fine. I can understand that my wish list is a little bit too much; I have to recognize this. However, at the same time, you know, you're setting the standards. You're saying: if you are doing a flame spread experiment or a fire growth experiment, so your fire is going to grow in intensity one way or another, I think it is not acceptable today to have an experiment that doesn't measure heat fluxes somewhere. You need to be able to track what's happening, you need to be able to characterize it quantitatively, and the best equipment that we have for that is a heat flux gauge. I'm saying this because I'm worried sometimes when I see in the literature a lot of experiments that are speculating on fire growth, or mechanisms for fire growth or flame spread, without these measurements, relying basically just on flame imaging. So I think one of the things we are trying to do is also implicitly define the standards of the field.
If you are serious about quantifying something about fire growth or flame spread, you need to give yourself the means to really measure it or, if you are doing computations, the means to actually quantify it in your simulation and check that you are accurate there. In this discussion we are trying to identify what would be an ideal experiment, also defining the standards of what should be measured for what kinds of problems, and that applies both to experimentalists and to computational modelers. What does it take to really have an accurate simulation of a heat flux gauge located at two meters from a flame? There are some requirements to do this, and so that's the kind of standards we're discussing.

Speaker 1:

I would love to take this example a little further. So let's say I'm running an experiment that I would like to one day become a benchmark for science. What tools are allowed? For example, should I measure it with a plate thermometer? Should I measure it with a thin skin calorimeter? Should I measure it with a Gardon gauge? Or should I measure it with Jim Quintiere's secret heat flux meter that no one's allowed to touch or even look too intensively at? What level are you expecting at MaCFP to let an experiment through and to get help?

Speaker 2:

This is a little bit outside of my area of expertise. I'm relying on the experimentalists to answer that question. What I want to see... I mean, I don't have any preference for which device to use to measure heat flux, but I want to see people doing a careful calibration. And, from a computational modeling standpoint, I'd like to have access to both the convective and radiative components of the heat flux, if I can. I would love people to have two measurements instead of one, but as for the best way of doing it, I'm aware these discussions are ongoing in the community of people doing experiments, and I don't have any strong preference. This is where I want them to converge and let me know what the most reliable data are. A similar problem occurs for soot volume fraction. There are different techniques available; some have their advantages depending on the problem. You have non-intrusive techniques, but it also depends on whether your flame is optically thin or optically thick; some of these techniques rely on an optically thin assumption. There are a number of choices that have to be made, and there is where I enjoy the discussion, but I'm sitting on the sidelines because that's not my area of expertise.

Speaker 1:

That must be the geekiest place in fire science. I'm not going to join it, but what you've described sounds like fun to me. Anyway, let's follow up on what makes an experiment worthy, what makes an experiment that can really be a part of changing fire science. How about some best practices for experimentalists? Maybe they're not there yet, maybe they're not yet vulnerable enough to go to MaCFP and share their experiments, but still they would like to do good fire science. What are the simple hacks and tricks that can immediately improve the quality of your experiments?

Speaker 2:

The first thing people have to have in mind: if you want to set up a benchmark experiment, you need to think about instrumentation. This approach of going back to a benchmark experiment is also an approach where you are going to go back to configurations that have been studied in the past, and you have to accept that, as opposed to studying new configurations of interest. If I look at my colleagues doing experimental work, there are people studying new configurations because they want to discover or characterize new phenomena, and that's a valid approach. But when you go to a benchmark, you're going to a well-known configuration and you are adding new instrumentation to reveal some new phenomena. So basically, you need to be in that mindset to be able to bring new diagnostics to this experiment. The best example of this, to me, is actually in the wildland fire research community right now, where I see, first of all, as I said, that this community doesn't necessarily have agreement on what a benchmark experiment is, but also, when they do experiments, typically they don't measure much. And so even when you look at a prescribed fire burn where you are trying to characterize flame spread, up to a very recent past, basically, people would do a burn over many acres. It would be a very difficult experiment to organize, because you need a lot of people on the ground for measurements or for safety, and then in the end what was extracted from that experiment was typically one value for the mean rate of spread in the direction of the wind or in the direction of the slope. So when you look at the amount of resources, the amount of effort that goes into organizing these experiments, and you extract one value... I'm simplifying things a little bit, but I think this is a fairly accurate description of where we were until a recent past.
Now these experiments are going to be monitored by imaging systems flying on drones or low-flying airplanes or helicopters, which are going to bring spatial and temporal resolution on the fire line movement, so that now, instead of one value, you're going to be able to measure spatial and temporal variations of flame spread. You're going to be able to look at acceleration of the flame in certain regions, deceleration of the flame in others, and try to correlate it with changes in the wind or changes in the fuel load or changes in the topography. So you're going to have insights into, you know, wildland fire dynamics that you didn't have before. So, back to your question: you can see that this is going to change the game. We're going to be able to see things we didn't see before. Our understanding is going to improve, then the models will improve, and then the tools to predict what's happening there are going to improve. So the key here is really better diagnostics. Even if these experiments may not be very well controlled, the fact that you are bringing measurements with high temporal and spatial accuracy is going to change our vision and understanding of the problem. So my best answer to your question is really: bring sophisticated, modern measurement techniques to these problems.

Speaker 1:

How much do we actually need to revisit the old experiments now, when we have these highly improved research capabilities? I'll give you an example of what I'm doing. I have a research project together with Professor Lukas Arnold, who's also very involved in MaCFP, and what we're trying to do is actually revisit the Jin experiments that are the foundation of visibility in smoke modeling. We found that the amount of controversy and uncertainty around this model has reached a level where we really feel we need to do it from zero. We need to redo this from scratch with modern diagnostics, because there are too many open-ended questions that are a significant contributor to the uncertainty of the model in the end. So we're just going to redo them. I wonder: we have, let's say, stable experiments, you know, the Sandia helium plume experiments, or maybe other experiments that went into MaCFP. Do you see a need, even now, for stepping back with all the new tools that we're obtaining year after year, or is there enough to do ahead?

Speaker 2:

Yeah, well, I don't see any problem revisiting old benchmark experiments with new diagnostics and with new questions. I mean, typically what you want is also to have new questions. Some of these experiments are still not fully understood. You know, even when I look at flame spread along the kind of large, continuous fuel packages that we have in building fires, we still do not have a full understanding of the relative weight of convective and radiative transfer. So there are still some fundamental questions, with implications for models and our ability to model these phenomena correctly, that are not fully answered. When you go to wildland fires, there you have flame spread along discrete fuel packages, and even though we understand the physics, the coupling between all these physical phenomena is so complex that we still don't have a full understanding of what controls flame spread in wildland fires. And so there's no problem in going back to benchmark configurations that look like they've been studied extensively but are not fully understood. If I take an analogy that is a bit daring and I look at, you know, literature: we don't have any problem rereading the classics. It's not because they have been read before, and maybe you have read them once, that it's not worth reading them again. So I don't have any problem with that, as long as you can justify it. You should be able to say: I'm going back to this problem because I want to understand this aspect of the physics, or I want to measure this quantity.

Speaker 1:

And how do you value repetition of experiments in fire science? Let's imagine I have an experiment in front of me; actually, I do have one. Let's say I'm burning 3D printed compartments to check out how they behave in fire. What would you think is more interesting: to burn it three times in the exact same way, or to burn it three times but, let's say, change the heat release rate or the alpha coefficient of fire growth or whatever else?

Speaker 2:

Well, yeah, I think this issue of repeatability of experiments is an important one. But usually, when you have a problem of repeatability, that means you may not be doing the kind of well-controlled experiments that we like in MaCFP. You are doing more like a test; it may be representative of the engineering problem, but it's not necessarily the kind of target that we have at MaCFP.

Speaker 1:

I call them exploratory experiments.

Speaker 2:

Yeah, I'm a big believer in doing many experiments to explore the parameter space. I'm always worried that sometimes, when you focus on a single experiment, the comparison is not going to be perfect because our models are not perfect, and you're going to waste some of your time and your resources focusing on one or two cases, as opposed to looking at maps of changing conditions and at least reproducing qualitatively the transition from one fire regime to another in a large parameter space. The fact that you may not be predicting exactly when the fire becomes an under-ventilated compartment fire is not as important as making sure that you are going to correctly say that this fire has at least two regimes, one well ventilated, one under-ventilated, and that, roughly speaking, the transition occurs within a certain range of conditions. So I believe in many experiments.

Speaker 1:

I'll go further on that. What about the reproducibility of research? Every time I submit a paper, I get a reviewer telling me: oh, you need to include this and this so the experiment can be reproduced. But in fire science, does anyone ever actually do that? I wonder if at MaCFP you do that. On another scope of experiments: I listen a lot to medical podcasts about longevity and things like that, and in this research, these people from the medical world would discuss a thing and say: oh yeah, this experiment has been done for 10 years on a double-blind, placebo-controlled group, and it has not been reproduced yet, so we don't know. And I'm like, wow, this is mind-breaking. In fire science, I feel like everyone will just jump on a single answer obtained from one experiment as the ultimate truth. So what about reproducibility? Do you do this in MaCFP?

Speaker 2:

Well, we do it somewhat, in our own way. If you look at MaCFP, you know, typically we have between five and ten different research groups doing computational modeling and contributing to the workshops, and these people are going to use their own tool, whether it's FDS, FireFOAM or some other code, and they are going to simulate the same problem, as in the experiments that are reported in MaCFP. So typically, for each target experiment, we'd have three or four computational modeling groups simulating that particular experiment, and the fact that you find similar results, which is often our experience when, of course, people use the same kind of models and apply the same kind of computational grid, is a kind of sanity check: a check that the community can simulate in the same way, and that whatever tool you're using, it can get the same result for the same problem. So it's a way to check on reproducibility. Again, the idea of MaCFP is to make collective progress and to do these tests where we can see that different tools are producing the same results.

Speaker 1:

I think what you just mentioned also goes in line with the previous comments about the competitiveness of MaCFP. This is a group exercise; you are part of a larger group of people doing this for the exact same purpose, to progress science altogether, and many people are doing the exact same exercise, and then they compare and find joy in finding that all of them come to similar results, which means the physics is good and we can proceed with our lives. I guess this has a lot of value and also takes down this competitiveness factor. You said you'd have nine or ten groups in computation. That's a question I had on my list: how big is the space of measurements and computation in fire phenomena, really? How many of us are there? Roughly how many groups are there?

Speaker 2:

I don't have the exact number of modeling groups. At MaCFP2, the second workshop, which was virtual and took place in 2021, I think we had nine modeling groups, if I remember correctly. If you look at the number of experimental groups who are contributing to MaCFP, this number is probably smaller. We have on the order of, I would say, somewhere between five and ten target experiments now, but a lot of them are taken from the past. We had experiments taking place at Sandia National Laboratories, for example, or some famous experiments on helium plumes, or pool fires from FM Global, or hydrogen. But right now, I would say, basically, NIST is very involved and FM Global is very involved in producing quality experiments and releasing data. I think we rely today mostly on the contributions from NIST and from FM Global. I should add, actually, the University of Maryland too. The experiments I have in mind are the pool fire experiments from Anthony Hamins at NIST. We have also burner experiments with extinction from Dong Zeng at FM Global. We have now PMMA flame spread experiments from Isaac Leventon at NIST and from Stanislav Stoliarov at Maryland. This is what comes to my mind. I hope I'm not forgetting anybody.

Speaker 1:

No, it's just….

Speaker 2:

So that gives you an idea. We have basically three institutions producing data right now, I think, for MaCFP.

Speaker 1:

I need an urgent clarification. What's a target experiment?

Speaker 2:

So a target experiment is an experiment that we collectively review and that we think is going to be a target for CFD model validation, and it is part of the list of experiments that we're going to study as part of MaCFP.

Speaker 1:

So basically you sit down and say: okay, this is the type of experiment we would need for CFD validation. And then what? One group does the experiment, three groups simulate it, and you compare the results?

Speaker 2:

Yeah, so we are in the process now of inviting... We have identified which experiments we want to talk about at MaCFP3 in October in Tsukuba, and so we are inviting modeling groups right now to go and simulate some of them. It's on a volunteer basis, so we are waiting for people to decide and let us know which experiments they are going to simulate.

Speaker 1:

Is there any interest from the combustion part of the world? I mean all the non-premixed flame people and turbulent flame people.

Speaker 2:

Yes, we've seen a few people from the combustion science community getting interested in participating in fire research. Sometimes it's because they think maybe there will be some funding available in fire research. Okay, but there is also this idea that the community is becoming a little bit more structured, so they can also more easily understand what the state of the art is. I think MaCFP contributes to being a window to the outside as well, and so it's a point of entry for people from outside of our fire research community. They can easily see what the state of the art is and decide to contribute. We've seen people coming from the combustion science community, especially at the FM Global workshops on fire modeling, and some of them have come back several times.

Speaker 1:

Fantastic, Arnaud, thank you very much for this very interesting talk. Maybe let's close with a statement: what's the most important part of organizing MaCFP from your perspective as a chair? What's the number one thing in there?

Speaker 2:

Well, so I'm going to put number one A and number one B.

Speaker 1:

You already made it structured. I like this.

Speaker 2:

So one A, I think, is just the people: bringing people together in a collective, collegial framework where we can have these discussions that help everyone in the long run. Okay, and one B would be that not only do we discuss, but we produce data, we produce knowledge, we try to leave a record of what we are doing so everybody can benefit from it.

Speaker 1:

Arnaud, thank you very much for joining me in the Fire Science Show. It was a pleasure, and I'm looking forward to your future at UMD and what you will bring to the faculty. That's so exciting. After this episode, I'm sure it's going to be a lot of very high quality computations and well-planned experiments.

Speaker 2:

Thank you, Wojciech. Thank you for the invitation. I'm a big fan of your podcast and, of course, I'm even more of a fan now that you invited me. I was a big fan before.

Speaker 1:

Thank you. And everyone, feel invited to the MaCFP workshops in Tsukuba, Japan. They will happen on the Sunday just before the conference. There are three workshop sessions planned, with a poster session in between them. It's going to be full of great people and interesting fire physics. So if anyone wishes to see this firsthand, I guess they're welcome to sign up, right?

Speaker 2:

Oh yeah, everyone is welcome. It's a very open forum, thank you.

Speaker 1:

And that's it. Thank you very much, Arnaud. These are not yet experiments that changed fire science, but experiments that have a pretty good chance to change fire science, or at least to become useful benchmark tools for us to proceed with fire science. I appreciate a lot the efforts of everyone involved in MaCFP. It was the first standing committee of the IAFSS, and more followed: now we have a large outdoor fires group and a human behavior in fire group, so this is certainly growing, and there are more structured committees that you can participate in. All of these happen together on the Sunday before the Tsukuba IAFSS conference in October. So if you would like to be involved, check out the conference website. The registration form should be available soon, and you should be able to register for the conference and workshops and see what's available. And I can assure you these workshops will be great. We're working very hard to make the workshops in Tsukuba a very nice experience for everyone. We have a great choice of workshops: classical workshops on Saturday and a mix of standing committee meetings and classical workshops on Sunday. A lot of them to choose from, so I hope you'll find something for you. And that would be it for today's episode. I hope that if you're running fire science experiments, this was very useful to you. If you're not yet running fire experiments, I hope it was at least interesting. And if you are a modeller, I think it's fundamental to understand where the science and knowledge come from. So thank you for listening to the Fire Science Show today, and see you here next Wednesday. Bye!