Environment Variables
The Week in Green Software: Modeling Carbon Aware Software
November 23, 2023
This Week, host Chris Adams is joined by TU Berlin researcher Iegor Riepin to talk about the benefits - and trade-offs - associated with load shifting over both space and time. Together, they nerd out over the specifics, discuss numbers, and weigh alternative methods of computing with green energy all around the globe. Iegor and his team did a study alongside Google, where they modeled the entire European electricity grid in order to study the effects of different types of load shifting, and how it can be most efficiently applied to the world of Green Software.

TRANSCRIPT BELOW:
Iegor Riepin:
Capacities of wind, solar, and storage required to achieve hourly matching of demand with carbon-free or clean electricity are reduced when we increase the share of computing jobs, and that means the share of power loads, that are flexible. So, in short, demand flexibility makes carbon-free computing more resource efficient.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams. Hello, and welcome to another episode of This Week in Green Software, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. When we talk about green software, we often talk about code efficiency, because it's something that we're often already familiar with. But as you learn more about integrating sustainability into software engineering, you end up learning more about the underlying power systems that all our servers and end-user devices like laptops and phones rely on too. And because the power systems we rely on are in the middle of a generational shift from fossil fuels to cleaner forms of energy, there are changes taking place there that can inform how we design systems higher up the stack. This summer, a team of researchers from Technical University Berlin published a study in collaboration with Google to help shed light on this emerging field. They modeled the entire European energy grid with open-source grid modeling software written in Python, along with a set of internet-scale data centers, to better understand how shifting their use of compute to match the availability of clean energy can affect the environmental impact associated with running this kind of infrastructure.

As if this wasn't interesting enough, the modeling also revealed some interesting findings about the cost of transitioning to digital services that run on fossil-free power every hour of every day. This sounded like absolute catnip for sustainable software engineers, and because this is also one of the first open studies published around carbon-aware software, it's possible to understand the assumptions behind these results and figure out how they might change in future. So if you want to learn a bit more about this, to help in this quest, joining me today, we have one researcher from the Department of Digital Transformation in Energy Systems at the Technical University of Berlin, Iegor Riepin. Hey, Iegor.

Iegor Riepin: Hi Chris, thanks for having me today.

Chris Adams: Iegor, it's lovely to hear from you again. Before we dive into the world of grids and carbon-aware software and the like, can I give you just a few moments to introduce yourself?

Iegor Riepin: Yes, of course. Hi, everyone. I'm Iegor Riepin. I'm a postdoc researcher at the Technical University of Berlin, where I'm part of the Energy Systems Department. In our department, we use methods from operations research and mathematical optimization to research cost-effective opportunities for climate neutrality.

This means I spend most of my time writing, solving, and debugging mathematical models of energy systems. Our research group also maintains an open-source Python environment for state-of-the-art energy system modeling, which is available at pypsa.org and github.com/PyPSA.

Chris Adams: Okay, cool. Thank you for that. So, I know that I actually met some other folks in the same department as you through an event called Clean Coffee that I used to run at ClimateAction.tech. Tom Brown came along to one of those Zoom calls back in 2019, and from there we just kicked it off. And that's how I found out about anything going on in Berlin. And I understand that you've been in Berlin for a while, but you weren't always studying and working in Berlin. You've been in other parts of the world as well, correct?

Iegor Riepin: Yes, that's right. I'm originally Ukrainian. I lived there until I was 20 or so, and then I moved to Germany to do my master's studies and then pursue an academic career. Before Berlin, I was doing my Ph.D. at the Brandenburg Technical University, where I worked with Professor Felix Müsgens on various questions of energy system modeling.

After finishing my Ph.D., I joined Tom Brown's group at TU Berlin as a postdoc.

Chris Adams: Cool. All right then. And that's how we ended up here then. Okay, so we're just about to dive into the meat of the show. And here's a reminder that everything we talk about will be linked in the show notes below this episode. So if there's a project mentioned, or a site, or a paper that we refer to, please do write in and tell us so we can update the show notes for other curious souls and help you in your quest for knowledge. All right, Iegor, are you sitting comfortably?

Iegor Riepin: Yes, I am standing comfortably.

Chris Adams: Okay, all right then, let's begin. So it's really tempting to jump right into the nerdy specifics about this research and so on. But before we do, can you share a little bit of the background to your current research, to provide the context for the work that we're about to discuss?

Iegor Riepin: Yes, of course. So over the past year and a half, my colleague Tom Brown, who is the head and the heart of our research group, and I have been working on open-source, model-based research dedicated to various aspects of 24/7 carbon-free electricity matching. This concept usually goes under the name of hourly CFE matching.

The 24/7 CFE is a new approach for voluntary clean energy procurement, where companies aim to match their electricity consumption with carbon-free or clean energy supply on an hourly basis and around the clock.

Chris Adams: Okay, so let me just check if I understand that. So normally when people say they run on green energy, they might be talking about things on an annual basis. And this is hourly, so that's 8,760 hours in a year, so it's 8,760 times higher resolution. So you avoid this whole thing where, if you say you're running on green power, you're not actually making a claim about running on clean energy at night, for example, something like that, right?
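
To make that difference concrete, here is a toy Python sketch, not from the study and with entirely invented numbers, comparing an annual matching score with an hourly 24/7 CFE score for the same demand and clean-supply profiles.

```python
# Toy illustration (not from the study; all numbers invented): annual vs. hourly
# matching of a data-center load with clean supply.
import numpy as np

rng = np.random.default_rng(42)
hours = 8760

demand = np.full(hours, 100.0)                               # flat 100 MW load
clean_supply = np.clip(rng.normal(120, 80, hours), 0, None)  # noisy stand-in, MW

# Annual matching: total clean energy procured vs. total energy consumed.
annual_score = clean_supply.sum() / demand.sum()

# Hourly (24/7 CFE) matching: only clean energy up to demand in the same hour
# counts, so surplus in windy or sunny hours cannot offset dark, still hours.
hourly_score = np.minimum(clean_supply, demand).sum() / demand.sum()

print(f"annual matching score: {annual_score:.0%}")
print(f"hourly CFE score:      {hourly_score:.0%}")
```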

Iegor Riepin: Yes, exactly. That's pretty much it. So, it's interesting, because 24/7 carbon-free energy hourly matching aims to eliminate all greenhouse gas emissions associated with the electricity use of an energy buyer. The strategy also aims at addressing the main problems that exist in matching demand with clean energy supply when using the so-called established certification schemes.

The audience of Environment Variables might know all this pretty well, but I'll just name a few to outline the context. So, companies who would like to demonstrate their sustainability credentials might opt for buying guarantees of origin, and this is a story related to Europe, or renewable energy certificates, also called renewable energy credits.

This is a story related to the United States. A common feature of these schemes is that renewable energy credits are, so to say, unbundled from the megawatt-hours of energy. When using these schemes, several problems can arise. One of the problems is that the assets procured with these credits, or the credits you procure from certain assets, might not be additional to the system.

This means if you buy renewable credits or guarantees of origin from an asset located in a particular system, say Norway: the asset was located in Norway, is located in Norway, and will be located in Norway no matter what you procure, so it's hard to claim any change to the system associated with your matching strategy.

The second problem that typically arises when using guarantees of origin is the so-called location mismatch. So if demand is located in Germany, and the energy asset generating the renewable credit is located in Spain, there will be hours when the grid between Spain and Germany is congested. So it creates an accounting problem.

It's likely that the electrons that were consumed were not the ones that were produced. So, these are the so-called additionality and location matching problems that arise.

Chris Adams: Okay, so let me just check if I understand that. So one of these issues was about additionality, you mentioned. So if I understand, what you're saying is, let's say that you've got some form of generation. Yes, you get paid for the power you generate. But there's another thing that people are paid for, which is the kind of greenness of it. And this, as you say, ends up being unbundled and traded separately, and basically a bunch of problems tend to happen once they're unbundled in that scenario.

So you spoke about something in Spain. Like, yes, there might be a grid where you could theoretically have a solar farm in Spain generating electricity, but whether it's actually really deliverable to somewhere in Germany is another matter, because it might not be physically possible to deliver that power.

So this is some of the kind of complexity that it's trying to address here, if I understand that right.

Iegor Riepin: That's correct. And some energy buyers do think about this and recognize these problems, and they opt for power purchase agreements. These are bilateral contracts between a buyer and a supplier. And when signing a power purchase agreement, these companies pledge to buy both the energy, so the megawatt-hours, and the environmental credits bundled to it. The problem here is that under power purchase agreements, renewable energy supply is typically matched over a long period of time with the buyer's energy demand. So, for example, there are plenty of companies joining the well-known RE100 group who claim to procure enough renewable electricity to match their consumption on an annual basis.

The problem here is, guess what, renewables are not generating at all hours throughout the day. In the literature this is sometimes described as renewables being non-dispatchable. Sometimes it's called intermittent. Sometimes people say that renewable energy is variable. There was a recent awesome podcast from Volts, where the host, David Roberts, had Jesse Jenkins on, who suggested the term weather-dependent fuel-saving technologies, which I think is an excellent way to put it,

Chris Adams: Yeah, that's...

Iegor Riepin: ...the way we will likely use it today. So when energy buyers sign PPAs with weather-dependent fuel-saving technologies, there will definitely be times when generation is low, so energy buyers have to depend on procurement from the local markets, which likely have some carbon content at a given point in time.

Chris Adams: Okay, so, just a quick translation. You said weather-dependent fuel-saving technology. That's opposed to, essentially, if I'm going to be burning gas or coal, I don't have to care so much about the weather, but I'm having to burn a bunch of stuff, and the whole combustion of fossil fuels is one of the problems we have here.

So, that's how you might frame it, right? Because once you've built something, once you've got, say, wind or solar or something like that, it's renewable, so you're not having to purchase the fuel. That's the distinction that we have there, right?

Iegor Riepin: So the term which I referred to, which was coined by Jesse Jenkins in that podcast, weather-dependent fuel-saving technologies, refers to the fact that when we use renewables, we do not have to buy fuel, which comes at a cost. We do not burn fuel, which comes with the carbon emissions associated with burning it, and we don't have to think about the fuel.

Chris Adams: Okay. All right. Thank you. Now that we've got some of these terms sorted out, we can talk about how some of this might work in the context of data centers and things like that. So maybe I should ask you a little bit about this study. Maybe you could just talk a little bit about what the study that you were doing with Google and with TU Berlin was about.

Iegor Riepin: All right, so we have an ongoing research project where we collaborate with Google. As the audience might know, the company has claimed a commitment to achieve 24/7 carbon-free electricity matching in all of their data centers worldwide by 2030. And from our side, we bring an academic look to a broad range of questions relevant for making data centers, and more broadly, any consumers from the commercial and industry sectors, carbon-free. These questions are like: how can one achieve hourly matching of demand with carbon-free electricity? At what cost does this come? How can advanced tech help? Or what would be the impact on the background grids? We released a series of open studies aiming to address some selected aspects of this complex question. So, where does demand flexibility come into the story? Demand flexibility is a degree of freedom that companies pursuing their goal of reducing carbon emissions can use and benefit from. This refers to a broad range of companies with various degrees of flexibility, which takes many forms and colors, mostly of temporal demand management, and, perhaps even more interesting, this refers to the demand flexibility of computing infrastructure or data centers that can be geographically scattered and managed collectively by one entity or one company.

So, data centers as electricity consumers have a special perk: they have the ability to shift computing jobs and the associated power loads in space and time. And in July this year, we released a new study where we focused specifically on the space-time load-shifting problem. So we look at the role space-time load-shifting can play in reducing the costs and resources needed to achieve perfect 24/7 matching.

We thought about the signals that companies can use for shaping their load, and we also looked at the trade-offs and synergies that arise from co-optimization of spatial and temporal load-shifting.

Chris Adams: Okay. So I'll just try to make sure I follow that from a kind of layperson point of view. So there's basically maybe two things that came out of that. One of these things is this idea that, if we know the amount of power being generated from clean sources might change over time, it sounds like there is a chance, rather than having to generate more, to make supply and demand match just by scaling back some of your own energy usage.

And this is something that isn't just done inside the technology sector. For example, I know that buildings might do this to cool a building down when the energy is cheap, for example at night, so that when you walk in, it's nice and cool. Or if you're in Texas or somewhere, so it's a nice, comfortable environment.

So that's the kind of moving things through time, and that's one temporal thing. But there's something special about computers and data centers in that, rather than just moving the energy, you can move the work to somewhere else where they have an abundance of power. And that's the kind of special thing that you ended up doing a bit of study into, right?

Iegor Riepin: Yes, exactly. We focused both on space and time flexibility. We write in the study that the space part nowadays is mostly a story of computing infrastructure, of data centers. Temporal flexibility applies to a broad range of companies who have various forms of temporal demand management.

Chris Adams: Okay, cool. All right, then. So we've spoken about that. And as I understand it, maybe I should just ask a little bit about, so what's the benefit of doing it this way? Like, why would you even think about trying to match some of this stuff up rather than just buying a bunch more green energy, for example, or a bunch of new solar farms?

Iegor Riepin: Well, thinking about this, imagine a company with an inflexible demand that wants to match its own consumption with carbon-free electricity. It could source carbon-free electricity from various sources. This would be a combination of grid imports, if the grid is clean or clean enough for your purpose, generation from renewable generators procured with power purchase agreements, and dispatch of storage assets. This is a pretty challenging task, since battery storage is helpful for bridging some hours of no wind and no solar, but it's not the right technology, or better framed, not the economical technology, for times when you need to firm weather-dependent wind and solar over an extended period of time.

So, prior research done by Princeton University's ZERO Lab, folks from Peninsula Clean Energy, and also by us last year did show that 24/7 CFE hourly matching is possible with commercially available technologies like wind, solar, and lithium-ion batteries. But it comes at a large price or cost premium, and with some curtailment of renewable electricity.

If the consumer in our imagined example has access to advanced tech like hydrogen storage or clean firm technologies, their price premium could be reduced. Data centers could use their special perk, the ability to shift loads over space and also time, to relax this problem of matching demand with carbon-free electricity supply.

Chris Adams: Okay, so what I think you might be referring to here is the fact that this flexibility reduces the amount that you need to buy and have ready to match the entire time. So it may be that you don't need to buy so many batteries or have so many wind turbines or something like that.

Yeah.

Iegor Riepin: Yes, so to put it simply, you can move loads to the places where you have access to carbon-free electricity from the places where you don't have enough, and by moving these loads you can reduce potential storage needs, and you can also reduce the amount of excess electricity that you might have curtailed otherwise.

Chris Adams: Okay, and when we just briefly touch on curtailing, that means that rather than just wasting this energy, just not being able to use it, you're able to put it to some kind of productive use, which basically improves the economics. It means you might reduce the cost of running something, for example, so it might pay for itself, doing something like this.

Iegor Riepin: Exactly. If you have signed PPAs with an asset, or you built an asset on site, you likely have some excess energy in some hours. You could sell this excess to the background grid, if the background grid takes it at a price. You could potentially store it, but you would only store it up to the point where storage is economical. And the part which you would not store and would not sell,

you would typically curtail.

Chris Adams: Okay, cool. And curtailing for the purposes here is basically having to throw it away because you can't use it productively. All right then. Okay, thank you for explaining that. So we spoke a little bit about some of the details on this, and it might be worth just briefly touching on some of the open approaches for this, because you've mentioned a tool called PyPSA, which I think is Python Power Systems Analysis?

It's something like that, right? And the open part of the study is quite a key thing for this project. Is that correct?

Iegor Riepin: Yes. So we use PyPSA, which is an open-source Python environment for state-of-the-art energy system modeling. This is a tool which our group develops and maintains for our own research, but it's also been used by a wide range of companies, institutions, NGOs, TSOs, and others who might find a use for open-source tools.

So PyPSA itself stands for Python for Power System Analysis, as it was originally scoped for power system analysis. However, nowadays the tool is used for many other applications beyond power, which include transport, heating, biomass, industry and industry feedstocks, carbon management and sequestration, hydrogen networks, and more.

So the open-source Python environment which we ship, or which we maintain, includes PyPSA itself, which is a modeling framework, but it also has several individual packages that make it possible to go all the way from data processing, such as calculating renewable energy potentials in the different countries that we model or collecting energy asset data, to creating and solving complex energy optimization problems.
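
As a rough illustration of what such a model looks like in code, here is a minimal sketch, assuming a recent PyPSA release where Network.optimize() is available and an LP solver is installed; all capacities, costs, and profiles are invented placeholders rather than the study's inputs.

```python
# Minimal sketch, not the study's model: one bus, an extendable wind generator,
# a gas backup, and a flat 100 MW load over one day. Assumes a recent PyPSA
# release (Network.optimize()) and an installed LP solver. Costs and profiles
# are invented placeholders.
import numpy as np
import pandas as pd
import pypsa

snapshots = pd.date_range("2025-01-01", periods=24, freq="h")
wind_profile = pd.Series(np.random.default_rng(0).uniform(0, 1, 24), index=snapshots)

n = pypsa.Network()
n.set_snapshots(snapshots)
n.add("Bus", "electricity")
n.add("Generator", "wind", bus="electricity",
      p_nom_extendable=True, capital_cost=1_000,   # placeholder EUR/MW/a
      p_max_pu=wind_profile)                       # hourly availability factor
n.add("Generator", "gas backup", bus="electricity",
      p_nom=200, marginal_cost=70)                 # placeholder EUR/MWh
n.add("Load", "data center", bus="electricity", p_set=100.0)

n.optimize()                       # least-cost investment and dispatch
print(n.generators.p_nom_opt)      # optimised capacities per generator
```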

So why would open-source modeling be interesting? There are a couple of things which usually come up when answering that question. One is transparency and credibility. By doing open-source modeling, we show that we have no cherry-picked assumptions in our studies that would drive certain results. Open-source is also pretty useful for reducing wasteful duplication of work. We can think about it like this: there are plenty of energy modeling groups in academia worldwide, but also energy modelers in consultancies or in industry. All of us are doing basically the same job, but if all of us have to create...

Chris Adams: your own model. Yeah.

Iegor Riepin: ...before we fly the airplane, we are not progressing much, and doing open-source is quite helpful there.

Sometimes you can just copy from somebody who did a good job and put it out under an open license, and go ahead.

Chris Adams: Okay, cool. That actually sounds quite exciting. So essentially, if someone's going to say, "oh, I think we can decarbonize this industry by this date," you can essentially model it and say, "these are the assumptions I'm making. This is why I think it's possible. This is why I think we can afford it. And this is how much I think it would cost," for example. And the idea is that because it's open, it becomes easier for, say, policy makers or civil society to basically say, "hey, you've made a really weird assumption here. I challenge that," and vice versa. And with it being open, although we've mentioned Google a few times, this could be used by any company or any organization that might also want to see if this would be applicable to them in running any kind of infrastructure themselves.

Is that about right? 

Iegor Riepin: Yes. When we do, for example, our study, which we are talking about today, it is released on GitHub as a dedicated release. And everybody who can run a Python script could reproduce our results, could see how our assumptions are formed, what we put into our optimization problem as parametrization, and, if you wish, you could basically get the same results on your local machine.

This way, you can be pretty sure that we did not cherry-pick stuff to shape or drive the results in one direction.

Chris Adams: Cool. That's really interesting. I didn't know that I could model the entire European grid on my laptop to try and settle a bet in the pub. That's quite cool. All right then. So let's go back to this study then. So we've spoken a bit about this, and we haven't actually discussed the findings yet.

So maybe I could actually ask you, are there any particular key findings you'd like to share so far? Or is there any nuance we should be aware of before we dive into some of the kind of juicy results here, for example?

Iegor Riepin: Yes, sure. I think we could address this nuances topic first. We could just briefly go through the study design and the key assumptions driving our results, so the audience would understand where we are coming from. So, for this study, we used a computer model of the entire European electricity system. With this model, we simulated the hourly operation of the electricity system and the so-called system development, so we looked at the cost-optimal investments the system would make in generation and storage assets for the modeled year.

We placed five data centers into the model, and we chose the regions where we placed our data centers so as to capture grids with different sets of features, unique renewable resources, and national characteristics. We assume the data centers have a nominal load of 100 MW for simplicity. The assumption of what the exact nominal load of the data centers is doesn't play any big role in the results.

If we had a smaller or a higher capacity assumption, we would observe the same trends.

Chris Adams: Ah, okay. 

Iegor Riepin: And we configure our mathematical problem so that all data centers follow the 24/7 CFE goal. So, to achieve this goal, data centers can co-optimize electricity procurement from the local grid and procurement of additional resources, such as storage, wind, and PV generators, that are additional to the system and located in the same bidding zone.

And we assume that the data centers have some degree of load flexibility, which we vary across scenarios, stepping from 0%, which would mean that there are no flexible workloads, so all loads are inflexible and must be served at the data center's location, up to 40%, meaning that 40 percent of data center loads are flexible and can be shifted to other places or delayed to other times.

This is what we do. What do we not do? First, we do not quantify the actual costs and technical potentials of achieving a certain share of flexible workloads. We just say, "hey, this data center fleet has some share of flexible workloads. How would you optimize the utilization of that flexibility, and what benefits might it bring?"

We also treat data centers simply as large consumers that can shift a certain share of loads, which is not too far from reality. What I mean here is that we abstract from the technical aspects and properties of flexible workloads and some physical constraints of quick ramping. There are tech folks who could focus on these topics with their knowledge.

And we base our model inputs only on freely available raw input data. So for the electricity system, we parametrize the system mostly from the Danish Energy Outlook. And for data centers, we make pretty generic, transparent assumptions. By this, we try to keep our study design broadly applicable to other companies who might have their own specific flexibility shares or forms or shapes.

And we try to keep our workflow available at GitHub for everybody to access and reproduce.

Chris Adams: Okay, so if I understand that correctly, essentially, you're doing this with as much open stuff as possible so that someone can reproduce this. And on the assumptions you're making about these data centers, although the size doesn't matter that much, 100 megawatts is about, that's like a medium-sized to large hyperscale data center. So again, it's somewhat reflective of reality. And you also mentioned that they're in different parts of Europe. So from memory, I think this was like Ireland, which is windy and in the West, Denmark, which has loads and loads of wind, and there were a few other places in Europe as well with different kinds of generation and different geographies.

So they're in different places. So it was somewhat representative of the regions we might use in the cloud, right?

Iegor Riepin: Yeah, so we eventually scattered the data centers across Ireland, Denmark (the western bidding zone), Finland, Germany, and Portugal. The idea was that we would take regions with different renewable resources; first, regions that are pretty far from each other, and also regions where there is data center consumption in the national energy mix.

And by that we take regions that are different enough, and we capture all the system dynamics that we want to.

Chris Adams: Okay, cool. I can recognize that with my kind of cloud hat on, thinking about running something in Ireland versus running it in Germany in this scenario. Okay, well, it sounds like we've given enough background for this. Should we dive into some of the findings? So yeah, maybe I should ask, what was one of the first findings that really caught your attention that you'd like to share from this?

Iegor Riepin: So I think we could go through several steps. The first finding that usually catches people looking at our study is this topic of resource efficiency and cost reduction. Just for the audience to understand: from our model, as a result of the optimization, we get procurement strategies for each data center, which are optimized to match demand with carbon-free electricity around the clock with some desired quality score.

So the cost-optimal technology mix that we get as an output depends on various factors, for example the renewable resource, this would be the wind and solar average energy yield, or cost assumptions, and many more. The clear trend that we observe across all scenarios is that the capacities of wind, solar, and storage required to achieve hourly matching of demand with carbon-free or clean electricity are reduced when we increase the share of computing jobs, and that means the share of power loads, that are flexible.

So, in short, demand flexibility makes carbon-free computing more resource efficient. What we also could do, we could retrieve the cost of any procurement strategy from our model, and thus we can map the resource efficiency to the cost effectiveness, meaning you pay less to achieve exactly the same. So the degree of this cost effectiveness scales with the level of flexibility that we assume.

So for the corner scenario, where we assume 40 percent flexible workloads, perfect hourly CFE matching, and co-optimized space-time shifting, the overall energy costs of the modeled data center fleet are reduced by up to 34%. This doesn't refer to the cost saving of an individual data center, but to the cost saving of a group of data centers scattered geographically and managed together by one company.

So these data centers consume basically the same amount of megawatt hours, but do shift their consumption in space and in time to optimize the resources.

Chris Adams: Okay, let me just run that by you so I understand. So essentially, you model different amounts of flexibility in a system where you're controlling multiple data centers. And essentially, the more flexible you make it, the more you can reduce the cost of actually having to buy all these solar farms and wind turbines and batteries, all the way up to the point where, if you've got 40% of your loads being flexible, then it reduces the costs by about a third, essentially. That's what I think you're saying there.

Iegor Riepin: Yes, that's right, but we should see it in perspective: this 30-odd percent cost saving is basically our corner scenario. We scale the ambition all the way up, which means we look at perfect matching between demand and carbon-free electricity, 100 percent, so you are not allowed a single gram of CO2 in any hour of consumption.

And we also assume in this scenario that the energy buyers only have access to wind, solar, and lithium-ion storage. This is a palette of commercially available technologies out of which it is just hard to make a 24/7 strategy. If these two conditions hold, then your costs are reduced by up to a third, and the costs are reduced by less than that for the whole palette of other technologies and scenarios that we considered in this study.

Chris Adams: Okay, cool, well thank you, that was bigger than I was expecting it to be. That's pretty impressive if you're going to be spending literally billions on power, like some large data center providers or data center users will be. Okay, then, Iegor. So, if I understand correctly, the amount of flexibility you might introduce with this kind of carbon-aware computing, if you say that you need to be running everything on 100 percent carbon-free or fossil-free or clean energy, will reduce the amount that you need to purchase, which essentially makes it more affordable or more accessible to a wider number of operators, I suppose. Are there any other findings that you would draw attention to in this study?

Iegor Riepin: Yes, of course. What we also do in the study is take a look at the signals that companies might use to shape their load-following strategies. To discuss the signals, we could first go through the spatial shifting story and then the temporal shifting story. For the spatial shifting story, the one signal which comes up front is the fact that hourly profiles of wind power generation have a low correlation over long distances due to different weather conditions. As a rule of thumb, you can think of the following: if two generators are located as far as 200 kilometers from each other, the hourly feed-in from these assets has very low correlation, and data centers could arbitrage on this effect, or, put simply, they could move load to locations when and where there is high wind generation.

That saves the cost of energy storage and thus reduces the amount of curtailment. So, the hourly profile of wind generation is not the only signal. Another signal that we discuss in the study is the difference in quality of renewable resources in the regions where data centers are located. The quality of local resources, or in other words the average capacity factors of solar PV in a given region, translates into the cost of electricity.

The higher the quality of the renewable resource, basically, the lower the average cost per megawatt hour. With spatial load-shifting, a rational buyer could just adjust their own procurement strategy to contract generators in better locations, those locations where renewable assets have lower costs, and co-optimize spatial shifts accordingly.
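
As a back-of-the-envelope illustration of that point, with invented numbers rather than the study's data: spreading the same annualised capital cost over more megawatt-hours per megawatt is what makes the better-resourced region cheaper.

```python
# Back-of-the-envelope illustration with invented numbers: a better capacity
# factor spreads the same annualised capital cost over more MWh per MW.
annualised_capex_eur_per_mw = 75_000   # placeholder EUR per MW per year

for region, capacity_factor in [("Portugal", 0.19), ("Ireland", 0.10)]:
    mwh_per_mw_year = 8760 * capacity_factor
    cost_per_mwh = annualised_capex_eur_per_mw / mwh_per_mw_year
    print(f"{region}: ~{cost_per_mwh:.0f} EUR/MWh at a {capacity_factor:.0%} capacity factor")
```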

In the study we illustrate this mechanism with a data center located in Ireland, which is not the sunniest region in Europe; it would tend to shift loads away during the daytime from mid-spring to mid-autumn. The data centers located in Germany and Portugal, regions with much better solar resources than Ireland, would tend to receive loads during this period.

This would work roughly reciprocally for wind-related load shifts. A data center in Germany would benefit from having partners in Denmark or Ireland, which have a much better quality of wind resources. So those are two signals for spatial shifting. There is one more, which we did not put much focus on in our study because of its geographical scope, but it could play a role for spatial load-shifting.

So if you look at the Earth from above the North Pole, the Earth rotates counterclockwise, from west to east. And we are pretty sure that it rotates at a constant, predictable speed, roughly once per 24 hours. So if data centers are scattered across the globe in distant locations and operated by one company, one could imagine a load-shifting strategy where loads would follow the sun.

Chris Adams: Yeah.

Iegor Riepin: So these are basically the three signals for spatial load-shifting. For the temporal load-shifting story, we illustrate cases where the variability of the regional grid emission intensity could drive carbon-aware temporal load-shifting. The grid signal can play a role in the load-shifting strategy if data centers have electricity imports from the local grid in their energy mix.

And the temporal flexibility could also be helpful in aligning the demand in time with the generation of procured renewable resources.
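
A toy sketch of how such signals could drive placement, hedged heavily: this is a simple greedy heuristic with invented numbers, not the study's co-optimisation, but it shows the mechanics of sending the flexible share of load to wherever carbon-free supply is most plentiful each hour, subject to a computing-capacity cap.

```python
# Toy greedy heuristic, not the study's co-optimisation; all numbers invented.
# Each hour, the flexible share of every site's load is pooled and sent to the
# regions with the most carbon-free supply, subject to a computing-capacity cap.
import numpy as np

rng = np.random.default_rng(1)
regions = ["IE", "DK", "DE", "PT", "FI"]
hours = 24
base_load = 100.0        # MW per data centre, as in the podcast's assumption
flexible_share = 0.4     # 40% of load may move, as in the corner scenario
capacity_cap = 140.0     # MW of compute available per site (invented)

# Stand-in hourly carbon-free availability per region (MW), decorrelated by region.
cfe_supply = {r: np.clip(rng.normal(100, 60, hours), 0, None) for r in regions}

schedule = []
for t in range(hours):
    must_run = base_load * (1 - flexible_share)           # stays at every site
    movable = base_load * flexible_share * len(regions)   # pooled flexible load
    placed = {}
    # Fill the regions with the most carbon-free supply first.
    for r in sorted(regions, key=lambda r: cfe_supply[r][t], reverse=True):
        headroom = capacity_cap - must_run                # computing-capacity cap
        take = min(movable, headroom)
        placed[r] = must_run + take
        movable -= take
    schedule.append(placed)    # this hour's load per region; movable ends at 0
```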

Chris Adams: Okay, so it sounds like there are almost two kinds of scales you're working at here. So the first thing you spoke about was, say, Ireland and Germany and Portugal. Basically Germany and Portugal are sunnier, and Ireland is windier, so during the summer, if you were running, say, computing in these three places, you might choose to run more of it in Germany and Portugal, and then as it gets a little bit darker, you basically choose to run everything in Ireland instead. And that's going to be a much more efficient way to run computing jobs on 100 percent carbon-free energy, a way you can do that at one of the lowest costs.

So that's one thing happening at the annual level, but you also said there's another thing which is much more tied to the kind of day and night cycle that you're referring to as well there. So there are different speeds that you might be thinking about, different trends that you might take into consideration.

Iegor Riepin: Yes, we talk about pretty complex optimization problems that span across space and across time, and the signals that would drive optimal utilization of flexibility through this space-time graph have various shapes. Some signals have a stochastic pattern, like wind feed-in, which is uncorrelated over long distances. Some signals have a predictable pattern, like solar profiles that follow the Earth's rotation. And some signals have something in between predictable and unpredictable.

Chris Adams: Oh, cool. Wow. I wasn't expecting that. Yeah. So we spoke before about, okay, one thing that you could do is essentially, during summer, you're running your computing jobs in Germany and Portugal where there's loads of sun and loads of clean energy and that's relatively cheap. And then in winter, you'll choose to run it in maybe Ireland, or somewhere where it's a bit darker, a bit gloomier, but way windier, and where there's oodles of green energy. But there are certainly some trade-offs that you have to make here, if you were to choose this. Maybe you could just expand on some of that a bit more, so that people understand, so that it doesn't sound too good to be true, for example.

Or people understand some of the specific nuances here.

Iegor Riepin: Yes, very right. What we do in the study is take a look at scenarios where we co-optimize, and scenarios where we isolate, the utilization of spatial and temporal load-shifting. These scenarios with isolated flexibility can be seen as just an academic exercise, but it's pretty useful for us to take a look at the system mechanics and get a feeling for the numbers.

As a result of the optimization, we can retrieve a value which would represent something like the reduction of the overall annual energy costs of a carbon-free electricity supply if a data center utilizes either spatial load shifts, or temporal load shifts, or both. If we compare the value of spatial and temporal load management when the spatial and temporal stories are isolated, we come to numbers of something like 6 to 1, depending on the scenario, in favor of spatial load-shifting.

So this means that shifting workloads across locations brings you something like six times higher value. And this takes place because data centers can arbitrage on differences in weather conditions and take advantage of them. This is the mechanism which we have just discussed. Shifting workloads across time, for it to bring a higher value, requires a few things.

For that to have a high value, data centers would need to buy electricity from a background grid which has high variability of the regional grid carbon emission intensity. So if the local energy mix is flat and dirty, or flat and clean, there is basically no value in shifting workloads from one time to another.

And the trade-offs appear between spatial and temporal load-shifting when both are implemented together. One can think about it this way: you have a certain share of flexible loads and you would like to shift some in space and some in time, but whatever you receive from other places and other times cannot exceed the upper cap, which would be the computing capacity constraint.

And whatever you shift away, meaning to other places or to later times, cannot exceed the lower cap, which would be the flexible workloads cap. And one thing on synergies: what we do show is that co-optimization of space and time load-shifting can yield benefits that go beyond the value that each of the two individual mechanisms could bring alone.
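
Expressed as a small hedged sketch, with illustrative variable names that are not from the study, the two caps just described might look like this for one site in one hour:

```python
# Hedged sketch of the two caps described above, for one site in one hour;
# variable names are illustrative, not from the study.
def shift_is_feasible(base_load_mw: float,
                      received_mw: float,
                      shipped_away_mw: float,
                      computing_capacity_mw: float,
                      flexible_share: float) -> bool:
    """True if a proposed space/time shift respects both caps."""
    # Upper cap: whatever the site receives cannot push it above its compute capacity.
    upper_ok = base_load_mw - shipped_away_mw + received_mw <= computing_capacity_mw
    # Lower cap: whatever the site ships away cannot exceed its flexible share.
    lower_ok = shipped_away_mw <= flexible_share * base_load_mw
    return upper_ok and lower_ok

# 100 MW site, 40% flexible, 140 MW of compute capacity.
print(shift_is_feasible(100, received_mw=60, shipped_away_mw=20,
                        computing_capacity_mw=140, flexible_share=0.4))  # True
print(shift_is_feasible(100, received_mw=80, shipped_away_mw=50,
                        computing_capacity_mw=140, flexible_share=0.4))  # False
```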

It's sort of an expected outcome for any operations research problem: if you have two degrees of freedom and you co-optimize them, you can co-optimize in a way that gets the benefit from the synergies of both. In the study, we come to this point from various angles, but here's a good example.

Imagine you have, say, three data centers scattered far from each other and operated by a single entity. Then imagine each data center has a mix of wind and solar capacity built on site. Let's assume that these data centers can shift workloads, with some fixed volume of flexible workloads.

Now, somebody comes and says that, "hey, we have one long duration energy storage asset that we could place in either of the three data center locations."

Chris Adams: A big-ass battery, basically. Yeah?

Iegor Riepin: Yeah. The question here is: where would you place it to reduce the energy cost of the entire system? If you write an optimization problem for this and solve it, we tend to see that the optimization suggests harvesting renewable electricity in the best locations, those locations where the lowest cost per megawatt hour is achievable, for example Denmark or Ireland with good wind conditions, and, integrally, opening access to this cheap, clean electricity for all locations through spatial load-shifting.

Chris Adams: So that thing you mentioned there, you've got this notion of moving things spatially or moving things temporally, through time. Essentially, you get more of a gain from moving things geographically, spatially, if you only do one of them. But that can be a little bit harder for organizations. So there is some gain from doing things temporally, but on the temporal side, you do need the grid to be a little bit more volatile, moving back and forth between very clean and then not very clean, going back to dirty energy essentially. But the thing is, these do work together, so you can move through time and space, and the benefits do compound in this case here.

Iegor Riepin: Yeah, and it's worth saying that the benefits actually do not simply compound together, because the spatial and temporal stories are subject to a shared set of computing capacity constraints. So when you co-optimize both, you inevitably have to trade off among them.

Chris Adams: Okay, compound is the wrong word then. All right. But basically, by doing the two things together, you can get a better saving than just doing one of them by itself, for example.

Okay. All right then. So we've spoken about how this has been applied to one company, and we've said that this could be used by multiple organizations. Presumably, if someone was to do this, you could do this for an entire sector, to figure out how much power might be needed for that whole sector, and see how much you might need to deploy to displace all the kind of fossil-based energy generation that data centers use, for example. Is it plausible that you could do something like that with this kind of modeling?

Iegor Riepin: Well, in our study, we tried to keep our assumptions on carbon-aware computing, in how we treat data centers and how we treat flexible workloads, as general as possible. So the study results should be applicable to a broad range of companies operating data centers, with their specific features and their specific workloads.

The study should also be applicable to a broad range of companies from the commerce, service, or industry sectors for which, say, only the temporal story is relevant. One cool thing here is that data centers can pave the way for space-time load-shifting applied to other industries, other applications we are not even aware of yet.

So, just to mention, I recently visited a group at the University of Wisconsin-Madison, Victor Zavala's Scalable Systems Lab. This is a bunch of awesome people. I spent three weeks with them. It's hard to say exactly what they work on, because they work on computational chemistry, on energy systems, on graph theory, optimization, programming.

I think they crush every problem that people throw at them, and when I was there, they were launching a new project focused on exploiting space-time interdependencies between electrochemical manufacturing and the power grid. The idea here is that the electrochemical industry would shift loads so as to co-optimize the economics and to reduce the carbon emission intensity of the electrochemical manufacturing, which is pretty cool.

Because before, I thought that space-time shifting was only about data centers. But now, well, there are applications beyond this sector. And I think this is the future we are heading towards.

Chris Adams: Okay, so basically, as more and more clean and, what's it, variable fuel-saving technologies come onto the electricity grid, it's going to get more and more kind of upy-downy, variable. And it's not just data centers this would be applicable to. So electrochemical stuff would be like synthesizing fuels, or making plastics or things from carbon captured from the air, for example, or things like that. I think Tom Brown mentioned a little bit about making methanol in this kind of way, or some of the green hydrogen stuff around splitting water into hydrogen and oxygen for creating chemicals that way. So is that what you're referring to in this scenario?

Iegor Riepin: Well, I don't know exactly what they will do in the project. It will be very interesting to take a look. I'm not sure that it maps very simply from one to the other. So for data centers, spatial shifting means that you move workloads and the associated power loads...

Chris Adams: ...through space, that...

Iegor Riepin: ...from one place to another. But this moving just means that computing jobs are being executed in one data center and not in another data center.

While the end consumer, somebody waiting for a YouTube video to be rendered, consumes the goods at the place they would be consumed no matter the shift. How exactly it works for the electrochemical industry, I'm not sure, but we will see once the project is developed.

Chris Adams: I see. Okay, cool. All right. You mentioned this being something which is more generalizable to a wider set of technologies, and one thing that this makes me think of is a new paper that was published by Facebook. They have a serverless platform which they call XFaaS, and one of the key things that they were doing is having this kind of geographically movable computing. And what really strikes me is that Facebook basically said that by allowing the actual computation to be flexible in terms of where it's run, they were able to massively increase the utilization of the data centers they were using.

So if you think about it, for most data centers, there might be single digit percentage utilization. So most of the time, they're not doing very much. Cloud might be, by some estimates, around 10 times more efficient, so maybe 20 to 30 percent utilization for most very well-run hyperscale data centers. Facebook themselves say that by introducing some of these ideas, they've been able to get up to 66 percent utilization, which basically means there's a bunch of extra hardware that they don't have to buy and build. Which is good, because they are spending lots and lots of money building data centers in lots of places, and anything that you can use to reduce the number of data centers you need, in my view, is a good thing, because that's a lot of buildings that don't get built, for example. But the key thing they mention in the paper, and we'll share the link to the paper, is that if you have a computing job or a function or anything like that, they make this stuff possible by adding kind of deadlines, or saying how tolerant of being moved through time or moved through space a particular job might be. And this kind of hints that this might become a norm for working with computing: if you don't need to have something happen right away in the same place, then you can basically get all these extra benefits by being a bit more flexible and saying this stuff up front when you submit a job to a computing cluster or something like that.
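
As a hedged sketch of the idea described here, and emphatically not Meta's actual XFaaS API: a job could declare up front how long it can wait and where it may run, and a scheduler could then use a carbon-intensity forecast to pick a greener slot. The FlexibleJob and pick_slot names below are hypothetical.

```python
# Hedged sketch of the idea described above, not Meta's actual XFaaS API.
# FlexibleJob and pick_slot are hypothetical names for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FlexibleJob:
    name: str
    deadline: datetime            # latest acceptable completion time
    allowed_regions: list[str]    # regions the job may run in
    est_runtime: timedelta

def pick_slot(job: FlexibleJob,
              forecast: dict[str, list[float]],
              now: datetime) -> tuple[str, datetime]:
    """Pick the (region, start time) with the lowest forecast grid carbon
    intensity that still finishes before the deadline.
    forecast[region][h] is the intensity h hours from now, in gCO2/kWh."""
    latest_start = job.deadline - job.est_runtime
    best = None
    for region in job.allowed_regions:
        for h, intensity in enumerate(forecast[region]):
            start = now + timedelta(hours=h)
            if start > latest_start:
                break
            if best is None or intensity < best[0]:
                best = (intensity, region, start)
    if best is None:
        raise ValueError("no slot meets the deadline")
    return best[1], best[2]
```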

Iegor Riepin: And it's pretty relevant to ask not only what the benefits are for the operator of the data center, but also what the benefits are, more broadly, for the background system in terms of costs and emissions.

Chris Adams: All right. Okay. So I know that you released a study in the summer, and there's some stuff which people can refer to here, but I'm also aware that we are in a fast-moving field, and you just mentioned some work at the University of Wisconsin, I think. Are there any other things you would like to have included in this kind of research, or that you think people should be looking at over the next 12 to 18 months, that might influence how people think about carbon-aware computing, or this set of changes? So, flexible computing like this, with a view to reducing the emissions associated with running the infrastructure that we all rely on right now.

Iegor Riepin: Maybe one study or research paper which I would love to see is one where somebody takes the courage to illustrate the system-level benefits of carbon-aware computing across different contexts and different states of the system. By system-level benefits, I mean from a society perspective, so we look at the total costs, or total carbon emissions, or total curtailment of renewable energy, and so on.

And by different contexts and states of the system, I mean the following. Nowadays, there are mostly companies who buy electricity from the local grid. They have some flexibility, they would go to data providers such as Electricity Maps, who provide short-term forecasts of carbon emission intensity, and they would factor it into their load-following strategies.

This can work for the temporal shifting, and it will soon, I believe, also work broadly for spatial load-shifting. So in this case, space-time shifting can help if you just buy from the grid. Then there is a follow-up to this: some companies might go beyond that and buy additional resources to eliminate their carbon footprint completely.

For that, space-time shifting could also help, and this is basically what our study is about. It would help you to be more resource efficient, it would help you to be more cost effective, and it would open 24/7 CFE to a broad palette of companies who maybe would not jump there otherwise because of the high cost premium.

And in the future we will hopefully be reaching net-zero electricity systems, or more broadly net-zero energy systems. And space-time shifting can be of help there too. So we would need some set of solutions to firm the variable wind and solar. We could think about a palette of solutions on the supply side.

It could be grid-connected battery storage. There could be hydrogen storage in regions where there are salt caverns. Or there could even be energy storage in liquid hydrocarbons, like methanol storage; my colleagues Tom Brown and Johannes Hampp recently published a paper on this. But these are all solutions from the supply side.

There could be solutions on the demand side, where there are large data centers that can move large loads across space and time. They could help the system to firm the variable wind and solar, provide that service to the system, and get some sort of remuneration for it. By the way, Victor Zavala's group has also published a research paper where they do mathematical modeling sketching out what type of remuneration data centers could get for providing this service to the system.

Chris Adams: Okay, if I could just quickly stop you there for a second because I want to check I understood it correctly. You're essentially saying that rather than it just being about looking at the cost only to say, in this case, it was like one tech firm looking at how much it would cost them, you're essentially saying it's possible to model this to say how much this kind of flexibility can save everyone else. If you actually had these providers, like a data center as a kind of active participant inside the grid, because that might reduce the amount of generation that the grid might need or that society might need. So essentially, it's like flipping it around saying, well, actually, is there some kind of value that can, or are there benefits that could be shared just outside of just the corporate, just outside of that company? Can it benefit other people as well?

Iegor Riepin: Yes, exactly. And more than that, whenever we look at these contexts: either your company buys electricity from the grid and tries to move load across space and time, either to reduce costs or to reduce emissions, depending on what signals the company takes; or the company goes 24/7 and wants to eliminate all of its emissions and has a higher impact on the background grid;

or we even reach net-zero systems. In all of these contexts, space-time load-shifting might be of help and might bring benefits, both for the companies operating it and also for the background systems. So if there were a study that took the courage to go through all of these transition phases and illustrate the benefits for the systems, that possibly would be a really good read.

Chris Adams: Cool. All right. Well, that sounds like something for people who are curious about playing around with this on GitHub, or who want to mess around with some of this modeling themselves. And I know there are a number of organizations and people, like in software development houses, who are actually trying to extend various tools like Kubernetes to incorporate some of this stuff, so that you can essentially design for this from the get-go, just like I mentioned with Facebook.

So Facebook's XFaaS paper talks about how they encode a degree of tolerance into this, and I believe that Intel is doing something similar for their version of Kubernetes. I'll share links to that for people who are listening. This has been possibly one of the nerdiest episodes we've ever done, but I've enjoyed myself, Iegor.

Thank you, Iegor. But before we wrap up: if someone has followed this, was able to keep up, is really curious and would like to learn more, where would you direct them to look if they wanted to dive into this some more themselves?

Iegor Riepin: Well, if people would like to know more about our research on space-time shifting, or more generally on 24/7 CFE, they could visit our GitHub page. It's github.com/pypsa/247cfe. There, in the readme, we explain what other research we're doing, how to clone our work, and even how to reproduce it.

If people are interested in the PyPSA ecosystem for open-source energy modeling, they could visit pypsa.org. Or, more generally, if people are interested in open energy research, there is the openmod initiative, which collects various research groups and open models all about energy.

And finally, if there is somebody interested in voluntary energy procurement, one could visit the 24/7 Compact, which is at gocarbonfree247.com, a place collecting people and companies working on this.

Chris Adams: Great. And Iegor, if people want to find you or follow some of your work directly, is TU Berlin the best place, or is there an Iegor Riepin on LinkedIn or something that you would direct people to for future questions?

Iegor Riepin: I have my LinkedIn, which I could attach, and there is my TU Berlin email.

Chris Adams: Brilliant. Okay. Well, Iegor, I've really enjoyed this. Thank you so much for diving down into the depths of carbon-aware and time-space shifting computing like we did today. Oh yeah, and happy birthday, by the way. I forgot. Yeah.

Iegor Riepin: Thank you, Chris. Thanks for having me today.

Chris Adams: All right. Take care of yourself and yeah, have a lovely week. Cheers, Iegor. Hey, everyone. Thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please, do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode!