Environment Variables
From Carbon Aware to Carbon Intelligent
August 8, 2022
In this episode of Environment Variables, Chris Adams is joined by Colleen Josephson of VMware, Philipp Wiesner of TU Berlin and Sara Bergman of Microsoft as they discuss the opportunities of making computing first carbon aware and then carbon intelligent. Variability, curtailment, disaggregation, 5G, 6G (!), delay-tolerant networks, intermittent computing, IoT and even a short segue about Raspberry Pis all make an appearance in this action-packed episode!

Learn more about our guests:

If you enjoyed this episode then please either:

Episode resources:

Talks & Events:


Open Source Projects:

Blog Posts & Articles: 

Transcript Below:
Philipp Wiesner: So I think we're only touching on the actual potential of how much flexibility there is in many workloads. And I think this is also one of the biggest challenges in the entire field, how to identify opportunities for flexibility, and then especially how to make schedulers aware of these opportunities.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams.

Okay. Welcome to Environment Variables. On this episode, I'm joined by Colleen Josephson of VMware, Sara Bergman of Microsoft and Philipp Wiesner of TU Berlin. Today we'll be discussing the opportunities and the challenges associated with making software first carbon aware and then carbon intelligent.

So before we dive in, though, let's have a quick round of introductions.

Colleen Josephson: My name is Colleen Josephson, and I am a research scientist at VMware. And a lot of what I've been doing at VMware has been focusing on sustainability for the past couple of years. The core of it has been in the telecommunication space, actually, which a lot of people don't know that VMware has a business unit dedicated to that, but I switched teams in the past couple of years.

So teams have been moving around, and now we're underneath the office of the CTO. So I've been doing much more broad and general sustainability work throughout the company. And as part of that, I'm actually the org representative for VMware in the Green Software Foundation; we are members.

Philipp Wiesner: Yeah. Hey, I'm Philipp. Thanks for having me here. I'm currently doing my PhD in computer science at the Technical University of Berlin. And my research is on carbon-aware and renewable-aware computing in the cloud, but especially also in novel computing environments, such as fog and edge computing.

Sara Bergman: Hello. My name is Sara Bergman. I am a software engineer at Microsoft, where I work with Microsoft 365. I am also the chair of the Writers project in the Green Software Foundation.

Chris Adams: Okay. So every episode we talk about green software, and today we're talking about carbon aware and carbon intelligent software. Well, we all know that electricity has to come from somewhere, but we don't always think about how it's generated. So I'm just gonna open this up to someone who's done a bit of work in this: for the uninitiated, what is carbon awareness in the context of computing?

Philipp Wiesner: So the idea behind carbon-aware computing is not to save energy, but to use the right energy. So that's green or renewable energy. And this has to do with the fact that the carbon intensity, which is basically how dirty the energy in the public grid is, varies over time. And of course it differs from place to place. And in carbon-aware computing, we're basically trying to exploit that. So we are trying to shift computational load towards times and towards places where we have green energy, or at least where we expect green energy.

Chris Adams: And if I understand it, you've done some work in this field specifically. And there's a paper I think I saw from you, "Let's Wait Awhile." Maybe you could just briefly touch on this, and then we can open the floor up to some of the other people here.

Philipp Wiesner: Yeah, sure. So "Let's Wait Awhile" is basically an analysis of the potential of temporal workload shifting. As I just touched on, in carbon-aware computing there are usually two dimensions to it. You can either shift workloads across geo-distributed data centers, like on a location scale, and you can also defer workloads on a time scale.

And this paper addresses the time scale. So we're basically looking at a single data center, and within the data center we have certain workloads that are maybe not urgent. So it doesn't really matter if we compute 'em right now, or in four hours, or maybe in ten. And on top of that, there might also be scheduled jobs that are always scheduled.

For example, nightly, like nightly builds, database backups, and so on. There's plenty of jobs that are scheduled nightly basically to not disturb anyone. But it doesn't really matter if we compute these jobs at one in the morning, two in the morning, five in the morning, as long as they're outside of business hours. But for a scheduler, for a carbon-aware scheduler, this makes a big difference, because if it has 12 hours of flexibility, using forecasts for renewable energy or for carbon intensity, it can decide when to run this workload.

And that can make a very, very big difference. And in this paper, we basically analyze the carbon intensity for different regions and try to identify time windows where shifting is very promising.
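The kind of temporal shifting Philipp describes can be sketched in a few lines. This is an illustrative sketch, not code from the "Let's Wait Awhile" paper: the forecast values are invented, and `best_start` is a hypothetical helper that simply picks the lowest-carbon window within a job's deadline.

```python
def best_start(forecast, duration, deadline):
    """Return the start hour (index into the forecast) that minimizes
    total carbon intensity for a job of `duration` hours that must
    finish within `deadline` hours.

    forecast: hourly grid carbon intensity, in gCO2/kWh
    """
    windows = range(0, deadline - duration + 1)
    return min(windows, key=lambda t: sum(forecast[t:t + duration]))

# A 2-hour nightly job with 12 hours of flexibility: instead of running
# at a fixed hour, the scheduler picks the greenest window.
forecast = [420, 400, 380, 300, 220, 180, 190, 250, 340, 400, 430, 450]
start = best_start(forecast, duration=2, deadline=12)  # → 5
```

A real carbon-aware scheduler would replace the hard-coded list with a live carbon-intensity forecast feed and re-plan as forecasts update, but the core decision is this windowed minimization.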

Chris Adams: Okay, cool. And Colleen, the reason I came across your work initially was when I saw, I think it was ACM e-Energy, you did a talk specifically about this in the context of your work with VMware as well. Is that the case?

Colleen Josephson: Yes. Yeah, it was Victor and I. Victor is over in the VMware research arm; we gave that talk together. It's been kind of a growing area of interest for VMware. When I arrived, it was just after we had partnered with the NSF, the National Science Foundation in the United States. And we actually specifically set aside some funds for a call on sustainable digital infrastructure.

And we funded, I believe, three different projects from that area. And we've been working with those academics on different aspects, overlapping a lot, actually, with what Philipp was just talking about: how do you take data centers and make them more carbon aware? One aspect we discussed a lot is shifting through time and space, and we've done some early investigations and crunched some numbers.

And we've also been working with, in particular, Andrew Chien at the University of Chicago. He has been working with Line Roald at the University of Wisconsin, and she has great expertise in kind of the power distribution side of things, which has been a really interesting consideration as we've talked with them.

Cuz when you start to think about, you know, beyond a single data center, you know, multi-cloud, large customers, people who have workloads and data centers across countries or regions, when you're shifting things in time and in space, you can actually have a significant impact on the grid. These big data center providers, you know, customers that use VMware software, Microsoft, Amazon and so on: if they are shifting their workloads around, you have the carbon intensity benefit. But one thing that, you know, our academic collaborators made us aware of is this very interesting fact that we will in turn be modifying kind of the economic dynamics of the grid, and, you know, potentially the actual operating capacity of, you know, how power is distributed.

So that's been a really interesting angle to think about from kind of a major provider point of view: how do we take these really interesting and promising ideas and begin to scale them up to, you know, a grid provider scale? And another thing that customers have brought up to us is data borders. You know, moving things from one corner of the United States to the other corner of the United States is typically not a problem, at least legally.

And you might have latency trade-offs, which need to be part of the equation. But when we went and kind of pitched our ideas to people over in the EU, a lot of people were, you know, pretty concerned about the fact that the borders are a lot closer. So you have customer data, and it can get a little bit trickier moving it from one region to another.

Sara Bergman: As someone working for a major cloud provider and based in Europe, this is of course something that touches on my interest, or piques my interest, because I think this is really interesting. At least for Microsoft, we have this "Microsoft runs on trust." We have the cute t-shirts and everything. Like, it's a core part of our business.

And I know that is the case for a lot of customers, not only Microsoft. So this is an important part of the equation, um, to make this not only theoretically possible, but actually possible at scale, where we can solve this for the large business customers. But I think I watched one of your talks as well, Colleen, where you also talked about the difference throughout the day. Because even if one country, which in Europe is quite small, doesn't have a lot of different energy providers, not only can the type of energy that's generated change throughout the day, but also what is competing for that energy will change throughout the day.

Like when we're all getting up in the morning and everyone's taking a shower before work. And, you know, I live in Norway, where everyone charges their electric car. Of course there's gonna be higher load on the grid, versus in the night there is a lot less, for example. So I think that's an interesting aspect as well.

Philipp Wiesner: Yeah, to add on that directly: this variability can be quite dramatic. So in France, for example, they have clean, not clean maybe, but low-carbon energy, because of all the nuclear power they are deploying throughout the day. So there you have barely any potential. But then there's regions like Germany, for example, which are very interesting because they're super variable. Like, Germany deploys comparably much wind power as well as solar power.

So at many times of the day, they manage to have large fractions of the grid provided by green energy. But if neither sun nor wind is available, we burn brown coal, which is pretty much the dirtiest of all fossil fuels. And this is why the variability is really crazy. Like, within a normal day, you can expect twice, or like 50%, fluctuations.

Like, it could be that one kilowatt hour that you consume now is twice as dirty as if you consume the same kilowatt hour a few hours later. And within a few days, you can even see the difference between the min and the max be a factor of four or something. So one kilowatt hour can really vary from 100 grams of CO2 up to 400 grams, or more, 500 grams.
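To make those numbers concrete, the arithmetic is simple: the same kilowatt hour of compute carries a very different footprint depending on when it is drawn. The intensity figures below are illustrative, chosen to match the range Philipp quotes.

```python
# Emissions for the same 1 kWh of compute at two moments in a variable grid.
kwh = 1.0
clean_intensity = 100.0   # gCO2/kWh on a windy, sunny afternoon
dirty_intensity = 450.0   # gCO2/kWh when brown coal fills the gap

clean_emissions = kwh * clean_intensity    # 100.0 g of CO2
dirty_emissions = kwh * dirty_intensity    # 450.0 g of CO2
ratio = dirty_emissions / clean_emissions  # 4.5x, within a few days
```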

Colleen Josephson: Yeah, this variability, you know, particularly with wind and solar and renewables like that, touches on something that we covered in our talk, which is the idea of curtailment, which is a really interesting opportunity. Before I started in this area, as an energy consumer, I had no idea about this concept. But basically, it gets into this supply and demand relationship that I was talking about, where energy providers want to match the energy available to the energy being consumed. And if the demand is low, uh, as Sara was talking about, there's times where people, you know, wake up and take a shower, or you get home and you start cooking or watching TV.

There's an ebb and flow of how power is consumed. And that ebb and flow does not necessarily match with how nature is behaving. So if people aren't using a lot of power, the grid can't accept it. Not yet, anyway. So what we have to do is actually burn it off, which I thought was pretty shocking when I learned about it: we have all these renewables and we're just letting it go to waste at the moment.

So there's this untapped potential as we start thinking about these workloads that can be deferred or moved in time or space. When these curtailment conditions occur, can we pull some of these compute workloads from our back pocket and basically increase demand, so we don't let this clean energy go to waste?
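One way to picture Colleen's point in code: hold deferrable jobs in a queue and release them only while the grid reports curtailed (excess) renewable supply. This is a toy sketch; `curtailment_signal` stands in for whatever grid signal or API a real system would poll.

```python
from collections import deque

def drain_on_curtailment(queue, curtailment_signal, run):
    """Run queued deferrable jobs only while curtailment is occurring.

    queue: deque of pending jobs
    curtailment_signal: callable returning True while excess renewable
        energy would otherwise be wasted
    run: callable that executes a single job
    """
    started = []
    while queue and curtailment_signal():
        job = queue.popleft()
        run(job)
        started.append(job)
    return started

# Simulated signal: curtailment lasts for the first two checks.
signals = iter([True, True, False])
jobs = deque(["db-backup", "ml-training", "report"])
ran = drain_on_curtailment(jobs, lambda: next(signals), run=lambda job: None)
# ran == ["db-backup", "ml-training"]; "report" waits for the next event
```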

Chris Adams: Okay. If I can bring it down to earth for a second, just to make sure I understand: the idea that you have here is that, without making any changes to the code itself, you're not really changing the code, you're just deciding to run the same code at different places, in a kind of sci-fi move, through space or through time.

And this is basically a way that you can essentially reduce the environmental impact of something without necessarily having to redesign it from scratch. That's what it sounds like you're saying in this case. Yeah?

Colleen Josephson: Yeah, that's one way to put it. And of course, you know, the very process of taking a workload and moving it has in itself, kind of, maybe you can think of it as meta code, and it can be enormously complex to get all of that correct. But let's say, maybe, you know, the core of what you wanna do is some training for an artificial intelligence model.

That same exact training is happening. You're just picking it up and plopping it in a new place or a new time.

Chris Adams: And when you spoke about curtailment just then, there was this idea that essentially, because you can't get the energy that's generated to someone who's able to consume it, it is essentially wasted, and you're not able to make some kind of effective or economically useful use of it. I'm sharing this largely for folks who might not have heard some of these terms, because if you're a software engineer, you might not have heard of curtailment or any of this stuff. Actually, we've spoken a bit about carbon aware.

So we have this notion that you can be carbon aware, in that, knowing that there are natural cycles on the grid, you can respond to them. And Colleen, I think you mentioned this idea of, is it carbon intelligent? Was it in one of the papers that you spoke about recently? I'm not sure. I think it was in your most recent paper.

It was actually kind of exciting for me, cause I hadn't heard this term before, and it feels like one of the next steps from this notion of carbon aware. Perhaps it was something I heard at HotCarbon, which is basically a conference specifically for folks doing this kind of work.

Colleen Josephson: I think that actually was not our paper, looking through it. Maybe my colleague, one of the other authors, spoke about it, and that could be something he said. But we definitely talk about carbon aware computing in our paper. But I think that in itself is kind of an interesting point to bring up: these terms are new. And they're very new because of the conversations. And I mean, there are some people who have been working in this area for a long time, but in the past couple of years, I think that interest and work in this space has really accelerated and picked up, and I'm seeing more venues than ever in which to present this work.

So I think carbon aware versus carbon intelligent is, you know, kind of an artifact of the fact that we're still pretty early on in the conversations that we're having and in standardizing our terminology, which is, you know, the Green Software Foundation itself has a standards group. So I guess this is a case in point why standardization is so important.

Philipp Wiesner: I think Google is using the term carbon intelligent across their papers, but I think it's just terminology. I think they all mean the same.

Chris Adams: Okay. Alright, we'll go with carbon aware for the time being in that case, until we hear otherwise. So essentially, when someone says carbon intelligent and carbon aware, it's more or less the same idea of essentially making your workloads kind of sympathetic to the natural cycles that, I guess, surround us, really.

Colleen Josephson: Yeah. And I think to add to kind of this hierarchy, what we think about in VMware is kind of a few different levels. One of the most important things that a lot of companies are still struggling with right now is just: how much carbon are we consuming through our operations, across all the different scopes?

It's really hard to answer that question right now. How much does this application, does this container, however I choose to slice it, what is its consumption of, you know, carbon and other resources? And then from there, you know, that's the classic: you have to be able to measure the problem in order to solve it.

And then the next step is, once you have this really good visibility into what you're consuming, you can then start to optimize for it. And that's where we get into kind of this awareness, uh, intelligence: managing your carbon.

Sara Bergman: Yes. And I think that visibility is so important. And I think it's important to get the right visibility to the right people. I think scheduling of some sort isn't really new. I mean, we've had supercomputers scheduling work for quite a long time, but this type of scheduling is, of course, new.

But if you get these concepts to people who have domain knowledge, then that's a concept they know, and they can say, oh, you know, this job can wait a while, because they have the domain knowledge to make use of this technology. And that's where it becomes really powerful: when we have the visibility to allow everyone to participate in it.

Because yeah, also, like we said, I think more people want to contribute and want to do something. And if you make it really easy, well then all the better.

Chris Adams: Okay. So I have to ask: let's say you can tune things to basically take advantage of electricity when it's cheap and green and abundant. Are there any kind of measurable savings that are actually published or out there? Because this sounds great, but are there any early results to at least give us some idea of what kind of savings could be gleaned from this, before you start thinking about redesigning?

Like, let's say you've got an app and you are trying to reduce the impact. Is there any kind of measurable reduction in carbon from this kind of work, for example? What kind of figures have been coming up so far?

Colleen Josephson: I think what Philipp was saying earlier really lines up with what we've been finding: these swings of 50%, depending on your region. You know, at the high end, we saw a carbon savings of 50% if you're in a highly volatile region, and then even in places where it's less volatile, there's still this cycle, and, you know, that tends to be more like 10%.

And I think it's really hard to say right now, because a lot of the experiments, as we talked about, are not yet taking into consideration which workloads are good candidates to move, or, you know, these data borders. You know, part of how we evaluated is we were just like, okay, let's take it from Germany to somewhere else, and not think about these, you know, policy issues that might make it so that we can only go from Germany to Poland, for example.

Philipp Wiesner: Yeah, I fully agree on that. I think the thing is that currently many results are still simulation based, also our results. So you can easily craft scenarios where you can, under certain conditions, get 50% carbon savings by moving workloads from the night to the day or something. The only research that I'm aware of where this was actually deployed somewhere was a Google paper that was published last year or this year, where they really have something running on their infrastructure. And then they report, I think it's 1% or something, that they actually cut. But this is then not only about jobs that were actually shifted; it's about the entire workload of the entire data center.

And that's then already quite impressive, because I think we're only touching on the actual potential of how much flexibility there is in many workloads. And I think this is also one of the biggest challenges in the entire field: how to identify opportunities for flexibility, and then especially how to make schedulers aware of these opportunities.

So most workloads are still black boxes in many regards. You maybe have a deadline, but that's it. But there's a lot more information about workloads that would be good to have for cloud providers if they want to schedule them. For example, whether workloads are interruptible, and what's the cost, the overhead, of interrupting and resuming a workload.

So for example, many machine learning trainings can take days; it's absolutely not uncommon. So if you know that interrupting these jobs is cheap, and it often is, because they already do checkpointing, they write intermediate results to disk all the time, then if you can interrupt and resume these workloads, you can really exploit short-term fluctuations in the grid.
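The interrupt-and-resume pattern Philipp describes can be sketched as a loop that only does work in clean time slots, relying on the job's existing checkpoints to make pausing cheap. This is a simplified illustration, not a real scheduler; `run_with_pauses` and the intensity series are invented for the example.

```python
def run_with_pauses(steps, intensity_series, threshold):
    """Return the time slots in which a checkpointable job makes progress,
    working only when grid carbon intensity is below `threshold` and
    pausing (to be resumed from checkpoint) otherwise."""
    worked = []
    for slot, intensity in enumerate(intensity_series):
        if len(worked) == steps:
            break  # job finished early
        if intensity < threshold:
            worked.append(slot)  # one unit of work; state is checkpointed
    return worked

# A 3-step training job against a fluctuating grid (gCO2/kWh per slot):
series = [350, 180, 190, 400, 160, 170]
slots = run_with_pauses(steps=3, intensity_series=series, threshold=200)
# slots == [1, 2, 4]: the job sits out the dirty slots at 350 and 400
```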

Colleen Josephson: Yeah, I just wanna say, Philipp, I absolutely agree. Everything you're saying really lines up with what we find. A really critical part of going from simulation to reality is this concept of how do you identify these candidate workloads? How do you figure out what is good to move? And, you know, maybe start off doing that by hand.

But in the long term, we need this to be automatic, you know, no human in the loop. So our systems need to be adapted so that when we create a workload, it has some sort of metric for, you know, whether or not it's movable at that time, how flexible it is, and so on.
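Colleen's "metric for movability" could take the form of metadata attached to each workload, which a carbon-aware scheduler then inspects with no human in the loop. The field names below are purely illustrative; no real VMware or Kubernetes API is implied.

```python
from dataclasses import dataclass, field

@dataclass
class WorkloadHints:
    """Scheduling metadata a job could carry to advertise its flexibility."""
    deadline_hours: float        # must finish within this many hours
    interruptible: bool          # can be paused and resumed
    resume_cost_minutes: float   # overhead of restoring from a checkpoint
    movable_regions: list = field(default_factory=list)  # legal destinations

def is_shift_candidate(job, min_slack_hours=4.0, max_resume_minutes=5.0):
    """A job is worth shifting if it has generous deadline slack, or can
    be interrupted and resumed cheaply mid-run."""
    return job.deadline_hours >= min_slack_hours or (
        job.interruptible and job.resume_cost_minutes <= max_resume_minutes
    )

nightly_backup = WorkloadHints(12.0, True, 1.0, ["eu-west", "eu-north"])
urgent_query = WorkloadHints(0.5, False, 0.0)
# is_shift_candidate(nightly_backup) → True; is_shift_candidate(urgent_query) → False
```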

Chris Adams: Okay. Now, if we could maybe unpack some of this for the case of these examples that you folks have just described. Colleen, you're working at VMware, so whereabouts is this happening in the stack? Is it a product from VMware that is doing this stuff, or is it somewhere higher up?

Like, say, Kubernetes, for example? Is this happening at a hypervisor level, or somewhere else? Cuz I know there are various parts of the stack where you could make an intervention, and I know there are examples of things like a carbon-aware Kubernetes scheduler. And where we work, we've got some PRs open to KEDA, which is an autoscaler specifically for Kubernetes, to use some of this information. But I would love to hear, cuz yeah, I didn't really know that much about VMware, and it seems like there's a whole fascinating paper on this, actually.

Colleen Josephson: Yeah. So I can't say this is, like, home to a specific product. I think one of the things that's actually really nice about VMware is we have something called our 2030 Agenda, where we have 30 goals that we want to achieve before 2030. It's an ESG-driven agenda, and we have a whole bunch of goals related to sustainability.

So we've really taken the past couple of years, since we announced the agenda, to make sustainability kind of core to what we do. So rather than having a sustainability office at the top, we wanna embed it and empower every single individual engineer. So we have these types of projects for moving workloads, measuring things, kind of going on in a few different places.

So all I can really say for sure is, you know, every company is a little hesitant to promise features for a specific project. We are very actively working on it; we have engineers and project managers and researchers. It's to be determined kind of where it will emerge. But, you know, we've been talking with partners and stakeholders, and I think that it's been a very active space in a number of places in the company.

Chris Adams: Cool. Cool. Thank you for that. Okay, Sara, was there something you wanted to come in on?

Sara Bergman: No, I was just nodding vigorously on a few things. So yeah, nothing additional to add.

Chris Adams: all right. So one of the things that came out of that was this idea of being able to, when you have a piece of work to be computed, it's either providing some kind of annotation or some, um, way of expressing that. Yes. It's okay. To pause me for example, or, yes, my I'm not I'm I'm not so urgent, but I'm, but as long as I'm done by this time, for example, So maybe have a few folks come across any kind of patterns that have actually that we might be seeing us wear, cuz based on what you folks tell me, it makes me think of the, the fact that I know when some of say Apple's work because they're now switching to a different kind of architecture that there is this notion of like annotating particular work that needs to be interactive to a user.

Something that might be a background thread. Do we have anything in the region of like a convention fr annotating stuff so that it's easy to. Especially. Yeah, go Colleen.

Colleen Josephson: I can't name any specific patterns, but what this is reminding me of is telecommunications. Telecommunications has a deep degree of work in prioritizing data streams, and that's been a really active area for a couple of decades. You know, whether it's cellular or more traditional, just internet, there is this idea of delay-intolerant traffic versus more delay-tolerant traffic.

And there's a really rich body of research that we can look at and maybe borrow.

Philipp Wiesner: Yeah, just one thing to add on that. I think what's special in particular about this scheduling problem, in comparison to, like, scheduling problems that we had before, is the time scale as well. Because if you talk about delay tolerance in telecommunications, or if we have, like, scheduling on a CPU level, that's really milliseconds or less. While if we talk about carbon awareness, like actually optimizing for the carbon in the grid, it rarely goes below the 15 minutes that we have as a forecast frequency, basically.

And if you optimize for, let's say, your own solar panels, then you can maybe use satellite data on, like, the five-, ten-minute scale; you can use weather data for, like, a few hours. But if you want to go below five minutes, then you already need sky cameras; you already need, like, video information of where clouds move and stuff.

So when we talk about scheduling and delay tolerance here, then we do not really mean that we start a job and 30 seconds later we resume it. It's, like, bigger jobs that run for at least 15 minutes, and then we stop them for a few hours, and then we resume them. So this makes it a bit different to what we had before.

Sara Bergman: Yes, I think this is super interesting, because if you start optimizing for perfect scheduling, it's easy to fall into the trap where you're investing more, or spending more, than the actual savings. You need to think of it as economics. So yeah, you can maybe schedule it down to every five minutes, which may be great for your application.

But if you then, like you said, Philipp, need to buy a sky camera, need to invest in, like, your own satellite network (I'm exaggerating now, but you understand the scenario), well then the actual carbon saving at the end of it might be, like, net positive, which is a negative in this regard. So any sort of carbon project, whatever, wherever it is, needs to take in the totality, because we are just one planet.

And I see a similar discourse in machine learning a lot, where people are very eager to use machine learning for solving climate problems, which I think is great. And I'm not saying we should stop doing that, but sometimes we're spending massive resources training those models to then save the world, where we actually polluted the world more while doing it.

Colleen Josephson: Yeah, we actually made that exact point in a recent white paper. One of the other organizations that VMware is involved with is the Next G Alliance, which focuses on telecommunications in North America. We're talking a lot about 5G; we've already got our sights set on 6G, and Microsoft is actually also a member.

And I co-lead something called the Green G working group, looking at how we can make our next generation telecommunications networks intrinsically sustainable. And there is a lot of excitement, like you said, Sara, about applying machine learning, but you have to remember this caveat that right now, training these models is really carbon intensive.

So you have to remember the resources you consume to get the job done. And kind of the same thing comes in with upgrades. So if you look at upgrading hardware in data centers, or telecommunications hardware: so 5G, I think, got some bad press for how much power the base stations consumed, but what's actually true about it is that the power consumed per bit transmitted has gone down significantly.

So there's a good advantage to upgrading your hardware. But then, you know, what about the hardware you're getting rid of? Everything that we produce has this concept of embodied emissions: it takes resources and carbon to produce this hardware. So you have to really carefully look at that sort of trade-off.

It turns out that keeping our devices, especially smaller devices, in use for as long as possible is one of the greenest things that we can do.

Sara Bergman: Exactly, because it's very tempting to look at only your share and try to slim it down as much as possible. But if that means you're just overflowing into other carbon budgets, well, the net effect isn't what you want anyway. And that might be a leap from how a lot of us are used to thinking about software, but I also think it, like, triggers that natural engineering curiosity in us all.

So it's not necessarily a bad thing.

Chris Adams: So we're talking about the embodied emissions for this, and I'm glad you actually spoke about the network part, cuz this is one thing where I don't have that much access to experts when looking at this. But, uh, as I understand it, for example, with 5G, there's a significant amount of embodied energy in making each of these towers. But would it be the case that you would have more towers, which are more efficient but have higher embodied emissions, compared to what you had before?

Like, I'm curious about the kind of trade-offs you actually might have to make there, because as I understood it, 5G tends to have a lower range. Is that the case, compared to, say, 4G, for example? Or is it able to fill in some of those gaps?

Colleen Josephson: There's different types of cellular infrastructure for different types of transmissions. So I think what you're touching on is this idea of these micro cells, which, you know, use millimeter wave; they have short range and they're deployed in dense urban environments. But that doesn't mean that cellular providers have stopped having these longer range communications. It's, you know, kinda like how your cell phone has different types of communication for different scopes.

You have Bluetooth for short range, you have NFC for ultra short range, and then you have your cellular and wifi for longer range. The same is true of cellular networks for really dense urban environments. You're gonna have these smaller. Micro cells trying to give high speed coverage in these urban areas, but you still are gonna have these larger cells deployed across the United States so that you still have, you know, a good range in coverage everywhere.

But when we talk about savings in telecommunications, I think one of the really big opportunities that's getting off the ground right now has to do with software-defined networking and virtualization. So historically in telecommunications, everything you needed was kind of in the tower, and there's this massive movement for disaggregation going on.

So you can begin to pick and choose providers and move different parts of the cellular network around. So, you know, one of the things that we've talked about is: there's the telecommunications industry, and then there's ICT, and the line between the two is beginning to blur, because a lot of what used to maybe happen only on a tower or in specialized hardware can now be done in general data centers.

So the two industries are really kind of solving the same sorts of problems, which is really interesting.

Chris Adams: So, if I understand that correctly, you are saying that some of the hardware, or some of the functionality that might have been bundled into a single piece of hardware, is somewhat being unbundled, so that maybe a cloud computing thing might be happening. And is this things like Open RAN, the open radio access network kind of stuff?

Colleen Josephson: Yeah, that's definitely getting into that area, openness so that you can have different modules communicating with each other. And there's a lot of really interesting opportunities there. So one fact, for example, is that in incumbent, really kind of older and existing, cellular networks, you don't have the ability to easily turn off a base station when it's not in use.

So these base stations are what are consuming the overwhelming amount of power in the network, and the really low hanging fruit is: well, when there's nobody around, turn down the volume. And the ability to have it more software defined means that we can try out algorithms to dynamically do that.

Right now a lot of it is people having to turn things down by hand and make a lot of really onerous changes to implement this, but as we start to move to a more nimble infrastructure for 5G and beyond, we really have this interesting opportunity to rapidly prototype these power savings algorithms and see what we can do.
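The "turn down the volume" idea could be sketched as a toy policy. Everything here is an illustrative assumption: the station data, the user threshold, and the handover rule are invented, and a real RAN controller would be far more involved.

```python
# Toy sketch of a software-defined base-station sleep policy: power down
# lightly loaded stations when a busy neighbour can absorb their users.
# Station names, thresholds, and the topology are made-up example data.

def plan_power_states(stations, min_users=3):
    """Return a power state per station: 'active' or 'sleep'.

    A station only sleeps if at least one neighbour is busy enough to
    be guaranteed awake, so coverage never disappears entirely.
    """
    plan = {}
    for name, info in stations.items():
        neighbour_awake = any(
            stations[n]["users"] >= min_users for n in info["neighbours"]
        )
        if info["users"] < min_users and neighbour_awake:
            plan[name] = "sleep"   # hand users over, power down radios
        else:
            plan[name] = "active"
    return plan

cells = {
    "cell-a": {"users": 40, "neighbours": ["cell-b"]},
    "cell-b": {"users": 1,  "neighbours": ["cell-a"]},  # idle overnight
}
print(plan_power_states(cells))  # cell-b can sleep; cell-a covers it
```

The design point worth noticing is the coverage guard: two idle neighbours will both stay awake rather than both going dark.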

Philipp Wiesner: So I see that in networking we have plenty of opportunities for energy saving. But do you see any opportunities for carbon awareness? Because from my experience, there's not so much. You either have edge infrastructure that is wireless and really energy intensive, but it's critical by design, it has to be fast by design, otherwise it wouldn't be at the edge. While, on the other hand, you have the big data centers that have a lot of batch jobs that are very flexible and deferrable, but they are connected via fiber, which is super energy efficient, and there's barely any consumption on that. So do you see any opportunities in that regard, or is it mainly computing that we can make carbon aware?

Colleen Josephson: Huge opportunities. And that's one of the reasons I'm glad that we're thinking about 6G right now: we have this major, to reuse the word, opportunity to design this next generation network to be carbon aware from the beginning. One thing we noticed when we started this work is that, historically, figuring out how much power a 3G, 4G, or 5G network consumes has been very much a cyclical thing.

You build the network, then you look back in time, and then you make measurements. When we were working on our first white paper, we actually couldn't even answer the question: how much energy does 5G consume? The work on that is ongoing right now, and we realized as we were doing this, it's like, why?

We have infrastructure to measure uptime, to measure latency. We already measure these things across basically every facet of computing. So why are we not measuring energy consumption and figuring out the carbon footprint? So we really need to design in this ability to measure how much power each part of the network is consuming, and report it in a really fine-grained, real-time way, so that we know, as we're starting to prototype 6G, exactly how much power it's consuming. And as we deploy different pieces of hardware or different algorithms: did this change actually work?
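As a back-of-the-envelope illustration of that kind of fine-grained reporting, the sketch below pairs per-interval power samples with grid carbon intensity at the same timestamps. The component, the sample values, and the interval length are all invented for the example.

```python
# Illustrative carbon accounting: combine per-component power samples
# (watts) with grid carbon intensity (grams CO2 per kWh) measured over
# the same intervals. All numbers are hypothetical.

def carbon_grams(power_samples_w, intensity_g_per_kwh, interval_s):
    """Energy per interval (kWh) times intensity gives grams of CO2."""
    total = 0.0
    for watts, intensity in zip(power_samples_w, intensity_g_per_kwh):
        kwh = watts * interval_s / 3600 / 1000
        total += kwh * intensity
    return total

# One hour of 15-minute samples for a hypothetical radio unit; the
# intensity falls across the hour as wind generation picks up.
power = [900, 850, 400, 380]       # watts per 15-minute interval
intensity = [300, 280, 150, 140]   # gCO2/kWh
print(round(carbon_grams(power, intensity, 900), 1))  # 155.3
```

The point of the exercise: once both streams exist at the same granularity, "did this change actually work?" becomes a subtraction rather than a retrospective study.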

Chris Adams: All right. So this is actually something that we're probably gonna cover in more detail on a future episode. There is some really fascinating work going on from an organization working on a protocol called SCION, which is a clean-slate implementation of the internet stack. Where I work, we've been doing a bit of work with them, and we have an ongoing project.

So basically: annotate every single public IP address on earth with carbon intensity information, such that you can start creating some of these paths. But one of the problems you have with the existing internet is that BGP doesn't necessarily account for these different criteria that you might want to have.

So if you go back to this notion of saying, I have a job where I need to move something, you only have one dimension right now with BGP. You can't really talk about saying, I care about latency more than I do about cost, for example, if I'm on a really important video call. Whereas if I'm doing a big backup or shifting a bunch of data, I might care about cost more than latency.
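The single-dimension limitation being described can be contrasted with a toy multi-criteria path picker: each candidate path is scored against weights the application chooses. The paths and metrics are made-up example data, not anything BGP or SCION actually exposes.

```python
# Toy multi-criteria path selection. Each metric is normalised against
# the worst candidate so latency (ms), money cost, and carbon intensity
# become comparable; lower weighted score wins. Example data only.

def pick_path(paths, weights):
    """Return the name of the path with the lowest weighted score."""
    worst = {m: max(p[m] for p in paths.values()) for m in weights}
    def score(p):
        return sum(weights[m] * p[m] / worst[m] for m in weights)
    return min(paths, key=lambda name: score(paths[name]))

paths = {
    "direct":    {"latency": 20, "cost": 9, "carbon": 400},
    "via-north": {"latency": 80, "cost": 3, "carbon": 120},
}
# Video call: latency dominates.  Bulk backup: cost and carbon dominate.
print(pick_path(paths, {"latency": 0.8, "cost": 0.1, "carbon": 0.1}))
print(pick_path(paths, {"latency": 0.1, "cost": 0.5, "carbon": 0.4}))
```

The same two paths win in different situations, which is exactly the expressiveness a single routing metric cannot capture.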

So there is some work starting to take place here, and it's the only example I've come across so far. It's totally worth a look, I think, because it's kind of interesting. But I hadn't actually thought about this in the context of 6G before this call; actually, I hadn't even heard of 6G, so I guess there's a whole little bit to add.

So if I may, I'm just gonna touch on this notion of delay-tolerant networks, because we spoke about the idea of having different criteria for jobs and stuff. Now, my understanding of this is that there is a decent body of work already, used for interplanetary networks; that's where some of this initially came from. When I was doing some research for this podcast, I found out that NASA actually has a whole bunch of really fascinating research on delay-tolerant networks.

And it looks like the actual timeframes they look at in terms of delay aren't that different from the time scales we've been talking about, in terms of 15 minutes to an hour, for example. I'd be curious if any of you folks have come across any kind of overlapping research here, because the idea of using technology from space sounds kind of cool, and it kind of worked for Velcro.

So I figured maybe there's some stuff that we could actually make use of.

Colleen Josephson: Well, I don't know a whole lot about delay-tolerant networking specifically, but this is reminding me of another area called intermittent computing, which has also ended up in space. And this is basically taking inspiration from the really small and embedded side of things, which is, I think, a theme in general of green software. Embedded devices historically have a much, much lower power budget than something that's in a data center. And as we look at these networks of, you know, internet of things, we wanna put them in more inhospitable places: outer space, farm fields, et cetera. And these places have no fixed power or communication infrastructure.

So it becomes really challenging to figure out how you budget when you're not sure how long you might have to run. So there are researchers, I think at the University of Massachusetts and Carnegie Mellon, who have done some really interesting work in how you have an architecture that deals with the fact that you could suddenly and gracelessly lose your operating power.

So how do you, you know, checkpoint things so that when, for example, the sun comes back out, you can pick up where you left off, make progress, and continue to do compute or fire out a packet? And some of this intermittent computing has ended up in outer space. I think there's a satellite that was created that takes advantage of some of these concepts.
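The checkpointing idea being described can be sketched in a few lines: persist progress after each unit of work so a sudden power loss, simulated here with an exception, wastes at most one step. The workload and storage format are invented for illustration; real intermittent-computing systems checkpoint into non-volatile memory with far more care.

```python
# Minimal checkpoint-and-resume sketch for intermittent power.
import json, os, tempfile

CKPT = os.path.join(tempfile.gettempdir(), "intermittent_ckpt.json")

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"next": 0, "total": 0}

def save_checkpoint(state):
    with open(CKPT, "w") as f:
        json.dump(state, f)

def run(items, power_fails_at=None):
    """Sum items, checkpointing each step; resume where we left off."""
    state = load_checkpoint()
    while state["next"] < len(items):
        if power_fails_at == state["next"]:
            raise RuntimeError("power lost")   # sun went behind a cloud
        state["total"] += items[state["next"]]
        state["next"] += 1
        save_checkpoint(state)                 # survives the outage
    return state["total"]

if os.path.exists(CKPT):
    os.remove(CKPT)      # start from a clean slate for the demo

items = [1, 2, 3, 4]
try:
    run(items, power_fails_at=2)   # lose power mid-way
except RuntimeError:
    pass
print(run(items))                  # resumes at step 2 and finishes: 10
```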

Chris Adams: Okay, this is something that is totally new to me; I'd never heard of it. If someone did wanna find out about how to apply space technology to these kinds of problems, where should they be looking? This sounds really, really fascinating. I did not expect us to go down this direction, but it sounds like lots of fun.

Colleen Josephson: Well, the researcher who did these space toasters is Brandon Lucia. I think also, I mean, look straight at NASA. They do a lot of research in this area and they have funding, and I was talking to somebody who works there, and they actually have a lot of work that's not directly tied to space.

They do stuff on, you know, underwater networking, for example, which is one thing I was surprised to hear that they work on. So I wouldn't be too surprised if there are researchers already there starting to think about some of these issues, and how you can apply what they've learned from delay-tolerant networks or intermittent computing to some of these challenges.

Chris Adams: I've just realized that, for anyone who is interested in the concept of intermittent computing, there is a really fascinating art project called the Solar Protocol. And it's a really fun, wacky project: it's essentially a website served by a cluster of Raspberry Pis in different parts of the world.

And basically there is a DNS server which routes requests to whichever Raspberry Pi has battery. And when the batteries run out, that Raspberry Pi stops serving the website, but because it's always sunny somewhere, there's always a steady supply of this stuff. And the thing that's really interesting is that they have a hack day coming up on the 15th of August.

So if anyone does wanna play with this stuff, you can actually do it; it's entirely open. There is a whole set of really fascinating stuff, and at a recent conference called LIMITS 2022, Computing within Limits, I believe there's a paper on this as well. It's a really fun project, and we've written about it in a magazine that we publish called Branch.

And if anyone has some curiosity about intermittent computing, this is probably the most iconic and wacky idea I've seen so far, and I imagine it might be a lot of fun for the people who listen to this kind of podcast.
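The routing idea just described could be sketched as a tiny selection function: out of a fleet of solar-powered servers, direct traffic to whichever one currently reports the healthiest battery. The fleet and battery readings are invented; the real Solar Protocol works from live telemetry, so this only captures the flavour of the decision.

```python
# Hypothetical "follow the sun" server selection, Solar Protocol style.

def choose_server(servers, min_charge=0.2):
    """Return the best-charged server's name, or None if all are flat."""
    candidates = {n: c for n, c in servers.items() if c >= min_charge}
    if not candidates:
        return None   # everyone is in the dark; serve a static fallback
    return max(candidates, key=candidates.get)

fleet = {"nairobi": 0.9, "sydney": 0.4, "new-york": 0.1}  # battery 0..1
print(choose_server(fleet))   # the node currently in daylight wins
```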

Sara Bergman: I love it when it feels like my work is becoming sci-fi. That's the best.

Chris Adams: Well, this is like part of the idea, the idea that you can move work through time and space. When you actually talk about it like that, it sounds extremely science fictional. And this is partly why we wanted to speak about this in addition to just the conversations about efficiency, because in many ways it does feel like it's sympathetic to a lot of the kind of patterns we might normally have.

You can think of, say, seasonal food as a bit like seasonal electricity or something like that, just on a much, much more compressed time scale.

Colleen Josephson: Yeah, this is starting to remind me of some recent work that a friend and colleague did, Pat Pannuto at the University of California, San Diego. He and one of his students started working on this concept of something called a junkyard data center, where they're using old phones, Nexus 4 and Nexus 5 phones, to kind of serve this Raspberry Pi role that you're talking about.

And they found that they were able to kind of match, and occasionally exceed, you know, modern cloud compute offerings. And I thought this was neat, because it's the intersection of what we were talking about earlier. You've got this kind of low-power computing, but then the junkyard part is getting at what we were talking about earlier with embodied emissions. People don't use Nexus 4 and Nexus 5 phones very much anymore because they're a number of years old, but they still can do really useful and powerful computing. So, you know, this is the reuse part of reduce, reuse, recycle, where we still have these very good, relatively speaking, sources of compute power. So how can we extend their useful life?

So that is a pretty cool piece of ongoing work also.

Chris Adams: Colleen, the stuff you mentioned there reminds me of a service from a company called Lancium, who basically take ex-hyperscale data center servers and put them into shipping containers right next to all kinds of solar farms and wind turbine sites, to essentially do this kind of interruptible, low carbon computing.

And in many cases, when the cost of electricity is, say, negative, for example, or when you are paid to scale back your power, they've essentially got another way to offer these kinds of services. I think this is really fascinating, when you're able to decouple a bunch of these ideas from a gigantic data center.

You don't necessarily need to have a massive out-of-town, big-box-Walmart data center to take advantage of these techniques.
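That price-driven behaviour could be sketched as a trivial policy: run flexible batch work only in the hours where the electricity price sits at or below a threshold, pausing otherwise. The price series and threshold are invented example numbers, not anything from a real operator.

```python
# Toy interruptible-compute schedule driven by hourly electricity price.

def schedule(prices_per_hour, max_price=0.0):
    """Return which hours the batch job runs, given an hourly price list.

    max_price=0.0 means 'only run when power is free or negative',
    i.e. when the grid would otherwise curtail generation.
    """
    return [hour for hour, price in enumerate(prices_per_hour)
            if price <= max_price]

# A windy night: prices dip negative in the small hours.
prices = [0.08, 0.02, -0.01, -0.03, 0.00, 0.05]
print(schedule(prices))   # run in hours 2, 3, and 4
```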

Colleen Josephson: Yeah, this idea of sustainable computing has really also gotten a lot of attention. VMware has its own project, in collaboration with Vapor IO, that's reminding me a lot of this, where we have a container data center and we're looking at these sorts of savings. So I think there's a few instances of this sort of work starting to happen.

Chris Adams: All right, sounds like lots of stuff for us to add to the show notes that we have here. If there's any kind of links or podcasts or projects that you'd like to draw people's attention to: what's caught your eye recently, in the context of the podcast, that you'd suggest people look at?

Philipp Wiesner: What I personally found quite interesting is recent work from Monica Vitali from the Politecnico di Milano in Italy, who looks at this entire topic of sustainable computing very much from an application side. So basically looking at: we have a certain business process, for example, where different components in our microservice architecture have a certain purpose.

For example, she, in a paper, talks about a flight booking process, where certain components of this pipeline may not really be necessary. They might add revenue to the operator, they might add quality of service or quality of experience, but under certain times or conditions we could trade this quality of experience or quality of service to consume less energy. Or maybe we have different implementations of certain aspects of a system. I think this is really interesting work, to think that maybe we can go deeper into how applications should be designed. So actually changing the code of software, to refer back to what was said at the beginning: we can actually trade some qualities of software for energy reductions during certain times, to better align with the actual availability of renewable energy.

And I think that's really interesting.
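One hedged way to picture that trade: optional components of a pipeline declare how much value they add, and the least valuable ones are shed as grid carbon intensity rises. The component names, values, and thresholds below are all invented for illustration; the actual research models are considerably richer.

```python
# Sketch of shedding optional quality-of-experience components of a
# pipeline when the grid's carbon intensity is high. Example data only.

def active_components(components, intensity_g_per_kwh):
    """Keep required components always; shed optional ones under load.

    Above 400 gCO2/kWh run only the essentials; above 250 shed the
    lowest-value half of the optional extras.
    """
    enabled = [c for c in components if c["required"]]
    optional = sorted((c for c in components if not c["required"]),
                      key=lambda c: c["value"], reverse=True)
    if intensity_g_per_kwh > 400:
        keep = 0                      # essentials only
    elif intensity_g_per_kwh > 250:
        keep = len(optional) // 2     # highest-value extras survive
    else:
        keep = len(optional)          # clean grid: everything on
    return sorted(c["name"] for c in enabled + optional[:keep])

# A hypothetical flight-booking pipeline.
booking = [
    {"name": "search",          "required": True,  "value": 10},
    {"name": "payment",         "required": True,  "value": 10},
    {"name": "seat-preview",    "required": False, "value": 5},
    {"name": "recommendations", "required": False, "value": 2},
]
print(active_components(booking, 120))  # clean grid: everything on
print(active_components(booking, 500))  # dirty grid: essentials only
```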

Chris Adams: Cool. Thank you, Philipp. I definitely need to get that as a link for the show notes. Sara, you've got something here to add as well, right?

Sara Bergman: Yes, I've been reading a blog post called "The Dirty Carbon Secret Behind Solid State Memory Drives", which is a very enticing title. It ties into this episode because it talks about the trade-offs between the embodied carbon and the lifetime emissions from using energy. So I'll link it; I thought it was insightful.

Chris Adams: Okay. Colleen, what's showing up on your radar these days?

Colleen Josephson: I think I already mentioned it, the junkyard data center, so I'll find a link for that, because I thought that was pretty neat work. And then I'll also try to find the piece on the intermittent-computing satellites.

Chris Adams: Cool. All right. Okay, in that case, I'm gonna share a couple of mine, and then I think we're gonna wrap up. The thing that I'm really interested in right now, in this particular context, is a carbon-aware branch of Nomad, which is the alternative scheduler from HashiCorp. It's very, very similar to Kubernetes, but somewhat simpler.

There is now a carbon-aware branch that actually does include some of this in its scheduling decisions, and that's something that I'm really excited about at the moment. And there's also a bunch of jokes around "low-carbon-etes" instead of Kubernetes. These days we've been doing a bit of work to build a specific Go-based CLI to plug into tools like this, so that any cluster you run should be able to do this kind of carbon-aware stuff. That's the thing I'm gonna be adding to the links here.

All right, folks, I've really enjoyed this. This has been super nerdy, but that's basically why people sign up and listen to this podcast.

And I really appreciate you sharing your time with us. So folks, thank you very much for this. Just before we go: if people wanna hear more about your research or your work, where should they be looking?

Philipp Wiesner: Probably just Twitter. So just first name, last name: Philipp Wiesner, without any dots or anything.

Chris Adams: Cool. And Sara

Sara Bergman: Same, you can find me on Twitter. It's my name with an E in the middle.

Chris Adams: And Colleen?

Colleen Josephson: Yeah, I post a lot on my website, colleenjosephson.net. And I also share some on Twitter, which is CJosephsonFull, because CJosephson was taken.

Chris Adams: I could definitely identify with that. My name is Chris Adams; Chris Adams was taken, so I am Mr. Chris Adams, which is @mrchrisadams. All right, folks, thank you very much for talking to us about green software and carbon-aware and carbon-intelligent software, and hopefully we'll see you on future episodes.

Thanks folks. Bye.

Philipp Wiesner: Thanks a lot.

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing; it helps other people discover the show, and of course we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.