Environment Variables
How does AI and ML Impact Climate Change?
June 13, 2022
This week Chris Adams takes over the reins from Asim Hussain to discuss how artificial intelligence and machine learning impact climate change. He is joined by Will Buchanan of Azure ML (Microsoft), Abhishek Gupta, chair of the Standards Working Group for the Green Software Foundation, and Lynn Kaack, assistant professor at the Hertie School in Berlin. They discuss boundaries, Jevons paradox, the EU AI Act, and inferencing, and supply us with a plethora of materials regarding ML, AI and the climate!

Transcript Below:

Abhishek Gupta: We're not just doing all of this accounting to produce reports and to, you know, spill ink, but it's to concretely drive change in behavior. And this was coming from folks who are a part of the Standards Working Group, including Will and myself, who are practitioners itching to get something that helps us change our behavior, change our teams' behaviors, when it comes to building greener software.

Asim Hussain: Hello, and welcome to Environment Variables brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Asim Hussain.

Chris Adams: Hello there and welcome to the Environment Variables podcast, the podcast about green software. I'm Chris Adams, filling in for Asim Hussain, the regular host, while he's on paternity leave with a brand new baby! I met Asim on climateaction.tech, an online community for climate-aware techies. I work for the Green Web Foundation, where we work towards a fossil-free internet by 2030, and I also serve as the co-chair of the Green Software Foundation's policy working group.

Today, we're talking about climate change, AI and green software, and I'm joined by Lynn, Will and Abhishek.

Will Buchanan: Thanks for having me. My name is Will. I'm a product manager on the Azure Machine Learning team, and I'm also a member of the Green Software Foundation's standards and innovation working groups. Within Microsoft, I foster the green AI community, which now has a few hundred members, and I'm also a climate activist focused on pragmatic solutions to complex environmental issues. Recently I shipped energy consumption metrics within Azure Machine Learning, and we are about to publish a paper titled "Measuring the Carbon Intensity of AI in Cloud Instances," which I think we'll touch on today.

Abhishek Gupta: Well, thanks for having me. I'm Abhishek Gupta. I'm the founder and principal researcher at the Montreal AI Ethics Institute, I work as a senior responsible AI leader and expert at the Boston Consulting Group (BCG), and I serve as the chair of the Standards Working Group at the Green Software Foundation. So I've got a few hats on there.

Most of my work, as it relates to what we're gonna talk about, runs at the intersection of responsible AI and green software. In particular, what's of interest to me is looking at how the intersection of social responsibility and the environmental impacts of software systems, in particular AI systems, can be thought about when we're looking to make a positive impact on the world while using technology in a responsible fashion. As part of the Green Software Foundation, I also help, through the Standards Working Group, to come up with the software carbon intensity specification, where we are trying to create an actionable way for developers and consumers of software systems to better assess and mitigate the environmental impacts of their work.

Chris Adams: Okay. And Lynn, last but not least, joining us from Berlin. Thank you very much for joining us.

Lynn Kaack: Yeah, thank you so much. I am an assistant professor at a public policy school, the Hertie School, in Berlin. I am also a co-founder and chair of an organization called Climate Change AI, where we facilitate work at the intersection of machine learning and different kinds of climate domains, focusing on climate change mitigation and adaptation.

In my research, I look at how we can use machine learning as a tool to address different problems related to energy and climate policy, and I'm also interested in the policy of AI and climate. And today, actually, since we're talking about papers, I have a paper coming up called "Aligning Artificial Intelligence with Climate Change Mitigation," where we look at the different impacts from machine learning and how they affect greenhouse gas emissions.

Chris Adams: Awesome. So we actually have some decent, deep domain expertise, and I'll try to keep this quite accessible, but we might drop into a little bit of data science nerdery here. The podcast has done that previously, and it turns out to be something we've had some decent feedback on, because there aren't that many podcasts covering this.

Okay. So let's get into this topic of green AI and climate change. As we know, IT is a significant driver of emissions in its own right. When we think about the climate crisis: this year the IPCC, the Intergovernmental Panel on Climate Change, in their big reports, which synthesize literally thousands of papers, explicitly called out digital as a thing we should be talking about and thinking about. And if you're a responsible technologist, it seems like something we should be taking into account here.

Now, I found it helpful to think about IT a little bit like how we think about the shipping industry, partly because they're similar in terms of emissions, at around one to three percent depending on how you look at it, but also in that both act like a kind of connective tissue for society.

We can also think of IT as a kind of force multiplier for existing forms of activity. So if you use it in a way that's in line with the recommendations of the science, that's a good thing; but if you use it to do something which rejects some of that science, it might not be such a good thing. And within technology, AI and machine learning in particular is one of the fastest growing sectors, and often seen as one of the biggest levers of all.

So we're gonna highlight some interesting projects to start off with, and out of that we'll probably dive into some specifics, or some of the things you might wanna take into account if you're a technologist wanting to incorporate an awareness of climate into how you work and build greener software.

Then finally, we'll hopefully leave you with some actionable tips and techniques, or projects that you may contribute to or use in your daily practice. There's another term that we might be touching on here when you're making AI greener, and that's specifically green AI. Is that the case, Will?

Will Buchanan: Correct. That term was actually coined by researchers a few years ago, Roy Schwartz and Jesse Dodge, and it's really focused on making the development of the AI system itself more sustainable. It's to be disambiguated from the term "AI for sustainability."

Chris Adams: Okay. So that's something we'll touch on today: we'll talk about some of the external impacts and some of the internal impacts. We're gonna start with something quite easy first, because, well, why not? I'm just gonna ask each of the people here to point to maybe one project that they've seen that's using ML in quite an interesting fashion, to ideally come up with some kind of measurable win.

Will, if there was one project you'd actually look to, that you think is embodying these ideas of green AI, or something which is really helping us face some of these challenges, maybe you could tell us about what's catching your eye at the moment.

Will Buchanan: I've been thinking a lot about natural capital recently, and I learned about a startup called Pachama, which combines remote sensing data with machine learning to help measure and monitor the carbon stored in a forest. I think it's really, really valuable, because they're providing verification and insurance of carbon credits at scale, and they've protected about a million hectares of forest. I think that's really powerful: when you have IT and remote sensing and machine learning combining to help nature restore itself.

Chris Adams: Okay, cool. So, if I understand that, they're using satellites to basically track forests and track deforestation. Is that the idea?

Will Buchanan: Yes. And also to verify the amount of carbon that a forest can sequester.

Chris Adams: Okay. Cool. All right. I know there's a few other projects related to this. If I just hand over to Abhishek: can you let us know what's caught your eye recently, and then we'll see what other projects come out of this.

Abhishek Gupta: Absolutely. One of the projects, and I don't know what the impact has been so far, is something that's come out of Mila, the Montreal Institute for Learning Algorithms, which is Dr. Bengio's lab in Montreal. In fact, one of the people who led that project, Sasha Luccioni, is a part of Climate Change AI as well, who I'm sure Lynn can talk more about too.

She's done this project called "This Climate Does Not Exist," which I think was a fascinating use of machine learning to visualize the impact climate change will have on, you know, the places around you, in a very arresting and visually captivating fashion. When we think about what impact climate change is going to have around us, sometimes it feels quite distant, because it's a slow-rolling thing that's coming our way.

And this puts it in a way that's quite immediate, quite visually arresting, and I think spurs people to action. As I said, I'm not sure what the measurable impact of that has been yet, but I certainly feel that those are the kinds of creative uses of AI we need when we want to galvanize people into action around climate change.

Lynn Kaack: I'm happy to also talk about an application which is also kind of difficult in terms of measuring impact, but I think it's another interesting component of what AI can do. This is something that the Austrian Institute of Technology does in a project called Infrared, where they use machine learning to help design new districts and cities.

Especially at the moment, in many countries a lot of new urban districts are being built, and how we build these has a huge impact on energy consumption in cities, both in terms of transportation, but also in how buildings are heated or cooled. And by the use of machine learning, they can drastically improve design choices, because now they can approximate their very computationally heavy models and run them much faster, which means that they can also have more runs and can try out more design configurations.

So this is a rather indirect application, but it has huge implications for emissions for many decades to come.

Chris Adams: So it's using kind of housing policy as climate policy there, because there's just a huge amount of emissions built into how people live, and whether they need to drive everywhere in a car, and stuff like that. Is that some of the stuff that it's doing?

Lynn Kaack: It's not really looking at housing policy, but it's looking at how districts are designed. So they take a group of houses, like if a new district is to be built, and then they simulate the wind flow going through these cities, which requires very expensive simulation models. And then they take the outputs of their model and approximate it with a machine learning model, which makes it much, much faster.

So from hours or days, you go to milliseconds, or below a second, for one run, and then you can try out different design configurations and understand better how the built infrastructure affects natural cooling in cities, for example, or walkability, or the energy impacts of the microclimate on the built environment more generally.

Chris Adams: I had no idea that was actually possible. That's really, really cool.

Will Buchanan: That's very cool. That's similar to generative design.

Chris Adams: Generative design. That's a phrase I haven't heard, actually. Will, maybe you could elucidate or share something there?

Will Buchanan: It's similar to some software that Autodesk has built, where you can try out many different iterations of a design and come up with optimum solutions. I think what's really cool is that you're consolidating it and running these models more efficiently.

Chris Adams: Okay, cool. And that's a bit like following, say, a fitness function: saying I wanna have something that works like a chair, and it needs four legs and a seat, and then it essentially comes up with some of the designs, or iterates through some of the possibilities. Something like that?

Will Buchanan: Exactly.

Chris Adams: Oh, wow. Okay, that's cool. All right then. So we've spoken about AI, and there's a few exciting, interesting projects that we can add into the show notes for people to look into and see how they might relate to what they do. I suppose I wanted to ask a little bit about measuring impact from these projects.

There's quite a few different ways that you can actually measure impact here, and many times it can be quite a difficult thing to pin down. This is something that's continually come up when people have tried to come up with specs like the software carbon intensity, and I'm sure, Abhishek, you've had some experiences here. Will, you've mentioned a little bit about actually measuring impact internally, and it sounds like you've had to do a bunch of this work on the ML team right now, exposing some of these numbers to the people consuming these services in the first place. Could you talk about some of that a bit, perhaps?

Will Buchanan: Certainly. So, as I mentioned, we have shipped energy consumption metrics for both training and inference within Azure Machine Learning, and that's really complex when you think of the infrastructure required just to report that. It doesn't necessarily account for the additional power that's consumed in the data center, such as the idle power for devices, or the utilization of your servers. There are so many different factors, so you could encounter scope creep when you come to your measurement methodology. It's really necessary to put boundaries around that.
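To give a feel for what device-level energy measurement involves, here is a minimal sketch that samples GPU power draw during a training loop and integrates it into an energy figure, using NVIDIA's NVML bindings (pynvml). The single-GPU setup and the `train_step` callback are illustrative assumptions, and, as Will notes, this captures only device power, not idle machines or wider data center overheads.

```python
import time
import pynvml  # NVIDIA Management Library bindings: pip install nvidia-ml-py

def train_with_energy_tracking(train_step, num_steps):
    """Run a training loop and roughly integrate GPU power draw into kWh.

    Captures device power only -- a fuller boundary would also include
    idle power, host CPU/RAM, networking, and data center overheads (PUE).
    """
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes a single GPU
    joules = 0.0
    last = time.monotonic()
    for _ in range(num_steps):
        train_step()  # stand-in for one training iteration
        now = time.monotonic()
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
        joules += watts * (now - last)  # power x time since last sample
        last = now
    pynvml.nvmlShutdown()
    return joules / 3.6e6  # joules -> kWh
```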

Chris Adams: Okay. And when you use the term boundaries here, you're saying: I'm gonna measure the environmental impact of the servers, but not the environmental impact of constructing the building to put the servers in. Is that the idea when you're referring to a boundary?

Will Buchanan: Yes, that's great.

Chris Adams: Okay. Alright. I think this is something we've come across quite a few times in other places as well, because straight away that sounds complicated. Maybe it's worth asking about this kind of boundary issue, because I know, Abhishek, you've had some issues at your end as well with defining this stuff, with deciding what's in or out, because I think this is one thing that we've had to explicitly do for the software carbon intensity spec. Right?

Abhishek Gupta: Exactly. And I think when we talk about boundaries, it's trying to get a sense for what are the actual pieces that are consumed, right? From an operational standpoint, from an embodied carbon standpoint, and how you make those allocations across what your system is consuming.

And I use the word system because, again, when we talk about software, we're not just talking about a specific piece, but really everything that it touches: be that network bandwidth consumption, be that, as Will was saying, idle power. When we're looking at cloud computing, it becomes even more complicated when you have pieces of software that are sharing tenancy across pieces of hardware, and different consumers are perhaps sharing that piece of hardware with you; and then there's thinking about whether you've booked the resource ahead of time or not, whether it's hot or cold in terms of its availability, and what implications that has.

I mean, there are so many different facets to it, and what I wanna highlight here is that each of those decisions comes with a trade-off, right? We also don't have any standards in terms of how we should go about measuring that, and what should or shouldn't be included. And so the way people report these numbers today doesn't really make it actionable for folks who consume, or want to consume, these reports and metrics in making decisions as to whether something is green or not.

And I think that's one of the places where the software carbon intensity specification is trying to help folks: to standardize it first and foremost, but also to make it actionable, so that if you are someone who's environmentally conscious, you can make the right choice by being informed about what the actual impacts are.

Chris Adams: This is a question that I'm curious about here, 'cause so far we've only been speaking about, okay, what is the environmental impact of IT itself, like its direct emissions. But the assumption that I have is that there are ways we might talk about the impact that it has on the outside world, in terms of what activity we're speeding up or accelerating or supporting. Is that the only issue that we need to think about, or are there any other things to take into account about this system boundary part that we've just been talking about?

Lynn Kaack: Yeah, so these system effects are really important to look at and to consider. Maybe just to give an example: if you use machine learning in, let's say, the oil and gas sector to make small parts of the operations more energy efficient, that at first sight looks like something that could be considered sustainable and green. But you also have to realize that often you are then reducing costs as well, and that might change how competitive oil and gas, in this particular example, is for the particular company. And that actually might shift how much oil and gas we are able to use in the short run, and how the prices change.

So indirect effects can actually have much larger impacts than the immediate effects of such an application. Drawing boundaries is really important, and so is opening this up to the broader system-level view, really trying to understand how the technology also changes larger consumption and production patterns. That's important.

Chris Adams: So, if I understand that correctly, that's talking almost about the consequences of an intervention that we might make here. So even though we might have reduced the emissions of the drilling part, by putting a wind turbine on an oil rig, for example, that might change the economics and make people more likely to use oil, which in many cases they might burn, for example. Is that basically what you're saying?

Lynn Kaack: Yeah. Essentially what I'm saying is that efficiency improvements in particular, and often those can be achieved with data science or with machine learning or AI systems, often come with cost reductions, and then those cost reductions do something and change something. Often this is considered under rebound effects, but it's not only rebound effects. It's the systemic, system-level impacts that come from these smaller-scale applications that need to be considered.

Will Buchanan: That's such a good point. And I think I've also heard it called Jevons paradox.

Chris Adams: Yes, Jevons paradox. This is stuff from the 1800s with steam engines, right? My understanding of the Jevons paradox was that back when people had steam engines and they made them more efficient, this led to people basically burning more coal, because it suddenly became more accessible to more people, and you ended up with it being used in a greater number of factories. So there's a kind of rebound that I think we need to take into account. This is something I think has been quite difficult to capture with existing ways of tracking the environmental impact of particular projects. We have, like, the idea of an attribution-based approach and a consequence-based approach.

And maybe it's worth actually talking here about some of the complexities we might need to wrestle with when you're designing a system. I mean, Abhishek, I think this was one of the early decisions with the software carbon intensity spec: choosing a marginal approach rather than an attributional one. And without diving too deeply into jargon here, maybe you could share a bit more information on that part, because it sounds like it's worth expanding on, or explaining to the audience a bit better.

Abhishek Gupta: Indeed. You know, the reason for making that choice was, again, our emphasis on being action-oriented, right? As we started to develop the software carbon intensity specification, one of the early debates that we had to wrestle with, and Will was of course a crucial part of that, as were the folks who were a part of the Standards Working Group, was figuring out how, for example, the GHG way of going about doing that accounting doesn't really translate all that well to software systems, and how adopting a slightly different approach would lead to more actionable outcomes for the folks who want to use this, ultimately, to change behavior.

Without getting into the specifics of what marginal and consequential approaches are, and I'm sure we'd all be happy to dive into those details, the thing that we were seeing was that we're doing all of this great work around, you know, talking about scope 1, 2, 3 emissions, et cetera, but it's not really helping to drive behavior change. And that's really the crux of all of this, right? We're not just doing all of this accounting to produce reports and to, you know, spill ink, but to concretely drive change in behavior.

And that's where we found that adopting a consequential, marginal approach actually helped make it more actionable. And this was coming from folks who are a part of the Standards Working Group, including Will and myself, who are practitioners itching to get something that helps us change our behavior, change our teams' behaviors, when it comes to building greener software, broadly speaking.

Chris Adams: So that helps with explaining the consequential or marginal approach: as in, the consequences of me building this thing mean that this other thing is more likely to happen. And if I understand it, the GHG Protocol that you mentioned, the Greenhouse Gas Protocol, with its scoped-emissions approach, is the standard way an organization might report its climate responsibility. And when you say scoped emissions, that's like scope one, which is emissions from fossil fuels burned on site or in your car, for example; scope two is electricity; and scope three is your supply chain. If I understand what you're saying, there's a kind of gap there, something that approach doesn't account for.

Some people have referred to this as scope zero or scope four: the impacts an organization is having on the systemic change we mentioned before, or, as Lynn mentioned, changing the price of a particular commodity to make it more likely or less likely to be used.

And this is what I understand the SCI is actually trying to do: it's trying to address some of this with a consequential approach, because the current approach doesn't capture all of the impacts an organization might actually have at the moment. Right?

Will Buchanan: That's a good summary. One challenge that I have noticed is that until it's required in reporting structures like the Greenhouse Gas Protocol, organizations don't have an incentive to really take the action they need to avoid climate disaster. It's something I encounter on a daily basis, and I think, broadly, we need to bring this into the public discourse.

Chris Adams: I think you're right. And actually, Lynn, when I've seen some of the work that you've done previously, this is something that's come into some of the briefings you've shared through your Climate Change AI work, and some of the policy briefings for governments as well. Is there something you might add on here?

Lynn Kaack: Yeah. So something that comes to mind is, for example, a concrete piece of legislation that's currently being developed: the EU AI Act. That's a place where, for the first time, AI systems are being regulated at that scale, and climate change almost didn't play a role in that regulation in the first draft.

So here it's also really evident that if we don't write in climate change now as a criterion for evaluating AI systems, it will probably be ignored for the next few years to come. So the way that legislation works is by classifying certain AI systems as high risk, and also just outright banning some other systems, but as high risk systems, Could as the original legislation stood, weren't really explicitly classified as high risk, even if they had a huge environmental or climate change impact.

And that's something that I talked about a lot with policy makers and trying to encourage them to more explicitly make environmental factors in climate change effective for evaluating my. So that'd be a very concrete case where making climate change more explicit in the AI context is important also in terms of legislation.

Abhishek Gupta: There's a lot said about the EU AI Act, right, and a ton of ink has been spilled everywhere. It's called the Brussels effect for a reason: whatever happens in the EU is taken as gospel and sort of spread across the world. But, as Lynn has pointed out, it's not perfect.

I think one of the things that I've seen being particularly problematic is the rigid categorization of what the high-risk use cases are, and whether the EU AI Act, hopefully with some revisions that are coming down the pipe, will have the ability to add new categories, and not just update subcategories within the existing identified high-risk categories. And I think that's where things like considerations for environmental impacts, and really tying those to the societal impacts of AI, where we're talking about bias, privacy and all the other areas, are going to be particularly important, because we need multiple levers to push on to get people to consider the environmental impacts.

And given that there is already such great momentum in terms of privacy considerations and bias considerations, I think now is the time where we really push hard to make environmental considerations an equally first-class citizen when it comes to thinking about the societal impacts of AI.

Will Buchanan: This is something I'm incredibly passionate about. It needs to encompass the full scope of harms that are caused by an AI system. That could be the hidden environmental impacts of either the development or the application; the application could vastly outweigh the good that you're doing, even just expanding oil and gas production by a certain percentage. I think it must account for all of the harms, to both ecosystems and people.

Chris Adams: thing. Does this category. Actually include this stuff right now. What counts as like a high risk use case? For example, when, when mentioned.

Lynn Kaack: I haven't seen the latest iteration. There's been a lot of feedback on the version that was published in April last year, and I think a lot of things have changed. In the first version, high-risk systems were those that affect personal safety, like human rights in the sense of personal wellbeing, but it completely overlooked the environmental protection aspects of human rights.

Chris Adams: Wow. That's quite a large omission, especially when you take into account the human rights angle. Okay, we've spoken about the external impact, but I am led to believe there is also an internal impact from this as well: AI has some direct impact that we might wanna talk about. As I understand it, we spoke about two to three percent of emissions here; but if we know there's an external impact, why would we care about the internal impacts of AI at all, for example the direct emissions?

Will Buchanan: So by direct emissions, you're talking about, let's say, the scope two of the operational costs of the model?

Chris Adams: Yeah. There's the external impact, what we've used this phrase scope four for, all the other things it induces in the world; but there is also the stuff which happens inside the system boundary that we've spoken about, and presumably that's something we should be caring about as well, right? So there'll be steps that we can take to make the use of AI, particularly, say, the model, more efficient and more effective. This is something that we should be looking at as well, presumably. Right?

Will Buchanan: And so in our paper, which is going to be published, I think, on Monday, we've calculated the emissions of several different models. One of them was a 6-billion-parameter transformer model, and the operational carbon footprint was equivalent to about a railcar of coal, and that's just for training. So it's really imperative that we address this and provide transparency around it.

Lynn Kaack: Is that for developing a model, or for training it once? I mean, is that with grid search, architecture search?

Will Buchanan: For a single training run. So it does not account for sweeps or deployment.

Chris Adams: Right. So there's some language here that we haven't heard before, and maybe it's worth unpacking. Will, could you talk briefly about, you said a railcar full of coal; I don't actually know what that is. In metric terms, what does that look like?

Will Buchanan: A hundred million grams. I don't have the conversion handy, but we took the US EPA greenhouse gas equivalencies. And I should add, the methodology that we applied was the Green Software Foundation's SCI: we calculated the energy consumed by the model and multiplied it by the carbon intensity of the grid that powers that data center.
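As a rough sketch of the methodology Will describes, the operational part of an SCI-style figure is simply energy multiplied by grid carbon intensity. The numbers below are made-up placeholders for illustration, not values from the paper.

```python
# Minimal sketch of the operational part of an SCI-style calculation.
# Both inputs are hypothetical placeholders, not values from the paper.

energy_kwh = 13_500.0        # energy consumed by a training run (made up)
grid_gco2_per_kwh = 420.0    # carbon intensity of the local grid (made up)

operational_gco2 = energy_kwh * grid_gco2_per_kwh
print(f"{operational_gco2 / 1e6:.2f} metric tons CO2-equivalent")
# 13,500 kWh x 420 gCO2/kWh = 5.67 tonnes CO2e for this made-up run
```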

Chris Adams: Cool. And that was per training run? So that wasn't the creation of the entire model, is that correct?

Will Buchanan: correct.

Abhishek Gupta: That's the other interesting part as well, right? When you're thinking about the life cycle, or the life cycle of the model, I should say, because life cycle has multiple meanings here: once that model's out there, what are the inference costs? If this is something that's gonna be used hundreds, thousands, tens of thousands of times, or if it's a large model that's now being used as a pre-trained model and is going to be fine-tuned by other folks downstream, are we able to then talk about amortization of that cost across all of those use cases?

And again, I think what becomes interesting is how we account for that stuff as well, right? Because we don't have complete visibility on that either. And I know Lynn's nodding here, because her paper that's coming out, actually the embargo gets lifted on the paper in an hour and a half, talks about some of those system-level impacts. Maybe, Lynn, you wanna chime in and talk a little bit about that as well.

Lynn Kaack: Yeah, thank you so much. Exactly. So I think a crucial number that we're currently still missing is not what is emitted from a single model in a well-known setting, but what is emitted overall from applying machine learning. What are the usage patterns and practices? How often do people develop models from scratch? How often do they train or retrain them? By "people" I mean, of course, organizations, and typically larger organizations and companies. And how do they perform inference, on how much data, how frequently? There are some numbers out there from Facebook and Google, and in their large-scale applications inference actually outweighs their training and development costs in terms of greenhouse gas emissions.

So inference might become a bigger share, depending on the application. We really need to understand better how machine learning is being used in practice, also to understand the direct emissions that come from it.

Chris Adams: An inference is a use of a model once it's in the wild; is that what an inference is in this case? So you could think of the making part, and then there is the usage part, from the inference, right? Is that how that part works?

Lynn Kaack: Exactly. So if you use a model on a data point, we call that inference: you've fed it the data and it's given you a result. Then training means you train a single configuration of the model once on your training data set. And development is what I refer to when you search over different configurations of the model.

There are lots of hyperparameters that you can adjust to achieve better performance, and if new models are being developed, then there's an extensive search over those hyperparameters and architecture configurations. That of course gets really energy intensive, because we are essentially training the model thousands of times.
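To make Lynn's three buckets concrete, here is a back-of-the-envelope sketch of how development, training and inference might add up over a model's life. Every number in it is a made-up placeholder, chosen only to show how inference can dominate for a heavily used model.

```python
# Back-of-the-envelope lifecycle energy sketch for the three buckets above.
# All numbers are made-up placeholders for illustration only.

training_run_kwh = 500.0       # one training run of one configuration
n_configs_searched = 1_000     # hyperparameter/architecture search (development)
inference_kwh = 0.001          # one inference request
n_inferences = 1_000_000_000   # requests over the deployed model's lifetime

development = n_configs_searched * training_run_kwh
training = training_run_kwh    # the final configuration, trained once
inference = n_inferences * inference_kwh
total = development + training + inference

for name, kwh in [("development", development), ("training", training),
                  ("inference", inference)]:
    print(f"{name:>11}: {kwh:>12,.0f} kWh ({kwh / total:.0%} of total)")
# With these placeholders, inference is roughly two-thirds of the total.
```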

Will Buchanan: Really interesting. I think NVIDIA posted on their blog that inferencing accounts for about 80 to 90% of the carbon costs of a model, and I think, Lynn, in one of your papers, Amazon had also claimed around 90%. So these are really non-trivial costs, and I'm not aware of any framework to measure this.

Lynn Kaack: That Amazon number, just to be clear, is costs, so monetary costs, and it came from a talk. But there are numbers now published by Google and Facebook, where they look at some applications of theirs in which inference outweighs training in terms of energy consumption. They're not exact numbers, and it's not entirely clear which applications those are, but there is some data, at least, that shows that.

And I think it just highly depends on the application that you're looking at. Sometimes, you know, you build a model and then you do inference once on the data set that you have, and in other cases you build a model and then you apply it a billion times, so of course that can add up to a lot more energy consumption.

Chris Adams: Wow. I didn't realize that was actually an issue, 'cause most of the numbers I've seen have been focusing on the training part. Well, I think this is something we spoke about before: there's a trend in the compute used for training already. I've seen figures from OpenAI, but my assumption was that basically computers are generally getting more efficient, about twice as efficient every two years or so, with Moore's law or Koomey's law or things like that. But if you are seeing an uptick in usage here, does that mean that things are staying about the same, or is there a trend that we should be taking into account?

Will Buchanan: So I think the computational costs of training have been doubling every 3.4 months or so, so I think the trend is only accelerating. The models are just getting larger and larger, and I think GPT-3 is one of the largest ones around at this point. I think we might challenge Moore's law.

Chris Adams: Okay. So if Moore's law is doubling once every two years, what is the impact of doubling every 3.4 months? Over a few years, what does that work out to be? I don't think I could do the exponential math, but it sounds like a pretty big number, if something is doubling every three or four months, right?

Will Buchanan: I also don't have the math handy, but I think it's important to note here, and Abhishek was talking about this earlier, that these models are very flexible. So you can train them once, then apply some fine-tuning or transfer learning approach on top of them, and repurpose these models for a number of different applications. And then you can even compress them, let's say using ONNX Runtime, so you can be very efficient, and you can really amortize the cost of that.
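For the record, the exponential arithmetic Chris asks about above works out roughly as follows (a quick sketch, assuming the stated doubling periods):

```python
# Compare the two doubling rates mentioned above over the same two years.
months = 24

moores_law = 2 ** (months / 24)       # doubling every ~2 years
compute_trend = 2 ** (months / 3.4)   # doubling every ~3.4 months

print(f"Moore's law over {months} months:        {moores_law:.0f}x")
print(f"3.4-month doubling over {months} months: {compute_trend:.0f}x")
# Roughly 2x versus ~130x growth over the same two-year window.
```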

Abhishek Gupta: Yeah, just building on Will's point, there's a lot of work on quantizing the weights of a trained network, applying distillation approaches, and other model compression approaches that actually help to shrink down the model quite a bit. Especially with the whole push for TinyML, trying to shrink down models so that they can be deployed on edge devices has been something that's helped to manage the computational impacts to a great extent. One of the other things I wanted to highlight, as Will was talking about models getting larger, is that there's this almost fetish in the world today to continuously scale and keep pushing forever-larger models, chasing SOTA, as they would say.

So, chasing state of the art, which is great for academic publications, where you get to show, hey, I improved state-of-the-art performance on this benchmark data set by 0.5% or whatever. I think what's being ignored is that this has a tremendous, tremendous computational cost.

In fact, one of the hidden costs that I think doesn't get talked about enough is this statistic out there that, you know, 90% of models don't make it into production. And that kind of relates to things like neural architecture search and hyperparameter tuning, where you're constantly trying to refine a model to achieve better performance.

A lot of that actually goes to waste, because that stuff doesn't make it into production, so it's actually not even used. And so there's a whole bunch of computational expenditure that never sees the light of day and never becomes useful. That obviously has environmental impacts, right? Because of the operational and embodied carbon impacts. But none of that actually gets talked about, reported or documented anywhere, because, well, who wants to admit, hey, I trained 73 different combinations to get to where I'm at? You just talk about the final results.
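As one concrete example of the shrinking techniques Abhishek mentions, here is a minimal sketch of post-training dynamic quantization in PyTorch, which stores the weights of linear layers as 8-bit integers instead of 32-bit floats. The toy model is a placeholder assumption; real savings depend heavily on the model and target hardware.

```python
import torch
import torch.nn as nn

# A toy network standing in for a real trained model (placeholder only).
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Post-training dynamic quantization: Linear weights are stored as int8
# rather than float32, shrinking the model roughly 4x and typically
# speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface as the original model
```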

Chris Adams: Okay. Let's say you don't wanna go down one of those rabbit holes: what should you be using, or where would you start, if you wanted to start applying some of these ideas about greener AI in your work on a daily basis? Does anyone have anything that they would lead with, for example?

Will Buchanan: Bigger is not always better; sometimes you really should choose the right tool for the job. We've had some really great graduate student projects from the University of Washington's Information School, and they built some case studies and samples around green AI. As an example, a project led by Daniel Chen compared a sparse versus a dense model in an anomaly detection setting. And they found that using a sparse random forest, sparse meaning fewer trees and shallow meaning smaller depth per tree, would save a massive amount of carbon and provide equivalent accuracy. I think it saved about 98% in terms of the monetary cost and energy.
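A minimal sketch of that kind of sparse-versus-dense comparison, using scikit-learn; the dataset and parameter choices are placeholder assumptions, not those of the student project.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for a real anomaly-detection set.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Dense": many unbounded trees. "Sparse": fewer, shallower trees.
forests = {
    "dense": RandomForestClassifier(n_estimators=500, max_depth=None,
                                    random_state=0),
    "sparse": RandomForestClassifier(n_estimators=30, max_depth=8,
                                     random_state=0),
}
for name, clf in forests.items():
    clf.fit(X_tr, y_tr)
    print(f"{name:>6}: accuracy = {clf.score(X_te, y_te):.3f}")
# If the accuracies are comparable, the sparse forest does far less
# compute per training run and per prediction.
```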

Chris Adams: Okay, wow, that's bigger than I was expecting. What would you say to people if they're in production and they're trying to do something, Lynn?

Lynn Kaack: I think the big goal should be to not only develop more energy-efficient machine learning models, but then also ensure that those are actually being used. Surprisingly, even sometimes within the same company, certain model developments are not being passed on to other parts of the company. So really trying to develop standard models that are then also being used in practice is important; so, interoperability of energy-efficient machine learning models.

Chris Adams: Say someone does wanna look at their stuff, and they do want to apply some of these ideas. You spoke a little bit about using other models. Where would you suggest people look if they wanted to operationalize some of these kinds of wins, or some of the better ways to make their software greener? I realize you've got a paper coming out, and you work on this day to day.

So yeah. What would you point us to?

Lynn Kaack: As I understand it, there's a lot of ongoing research in the machine learning community on energy-efficient machine learning. I don't have any names off the top of my head in terms of workshops or community resources where one can see what the most energy-efficient model types for a specific application are.

I know that there are some very comprehensive papers that summarize all the different research approaches being taken. But I would encourage you, if you are looking to use a deep learning model of some kind, to just inform yourself quickly whether there's also a leaner version of it. Many widely used models, like BERT, for example, have smaller versions that can almost do the same thing, and maybe your performance doesn't suffer much if you're using a much lighter model.
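As an illustration of Lynn's suggestion, here is a sketch of swapping a full-size BERT checkpoint for its distilled sibling using the Hugging Face transformers library. The checkpoint names are real published models, but treat the snippet as a sketch rather than a benchmark; the footprint savings for your task would need measuring.

```python
from transformers import pipeline

# Swap a full-size BERT checkpoint for a distilled one. DistilBERT has
# roughly 40% fewer parameters than bert-base while retaining ~97% of
# its language-understanding performance, per the DistilBERT paper.
# heavy = pipeline("fill-mask", model="bert-base-uncased")
light = pipeline("fill-mask", model="distilbert-base-uncased")

for prediction in light("Machine learning has a [MASK] carbon footprint."):
    print(prediction["token_str"], round(prediction["score"], 3))
```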

Chris Adams: Okay, so lighter models, and looking around at what we already have. And Will, is there a paper or a source you might point to?

Will Buchanan: I was actually gonna talk about the carbon-aware paper that we're about to publish, but I think that's a slightly different track.

Chris Adams: That's out next week, right? So that will be the 13th or 14th of June; that's when that'll be visible, correct?

Will Buchanan: Exactly.

Chris Adams: Okay. Cool. All right then, there's a load more that we could dive into, and we've got copious show notes here. So what I'm gonna do is say thank you everyone for coming in and sharing your wisdom and your experiences with us.

And hopefully we'll have more conversations about green software in future. Thank you folks.

Asim Hussain: Hey everyone. Thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing; it helps other people discover the show. And of course, we want more listeners to find out more about the Green Software Foundation, so please visit greensoftware.foundation. Thanks again, and see you in the next show.