Environment Variables
Green Networks
September 12, 2022
Environment Variables is back! Chris Adams hosts our Green Networks focused episode, joined by Eve Schooler, Principal Engineer and Director of Emerging IoT Networks at Intel, and Romain Jacob of ETH Zurich. They discuss how we can reduce the energy consumed by networks, and how we could leverage current research to make the internet more energy efficient.



Connect with us on Twitter, Github and LinkedIn!

Transcript Below:
Romain Jacob: In many internet service provider networks, so kind of the edge of the internet, where we have strong seasonal patterns in the traffic, there are low-hanging fruits. There are many of those small networks, so the benefits you can get there actually add up pretty quickly. And even if they don't seem interesting if you look at a single network, if you apply those principles everywhere, you can achieve a very large effect. And that's something every network operator should have a look at, if only to reduce their energy use.

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams.

Hello, and welcome to Environment Variables, the podcast about green software. I'm Chris Adams, your host; today I'm filling in for Asim Hussain. And on this episode, I am joined by Eve Schooler of Intel. Hi

Eve Schooler: Hi.

Chris Adams: Eve, and Romain Jacob of ETH Zurich in Switzerland.

Romain Jacob: Hello, everybody.

Chris Adams: And today we're gonna discuss the levers available to us for greener networking.

Now, if we go by the figures from the International Energy Agency, data networks used around 250 terawatt hours of electricity in 2019. And while we don't have the figures yet for 2022, the same agency is projecting an estimate of around 270 terawatt hours to be used by the end of this year, which for context is more than the entire electricity usage of Germany, the fourth largest economy in the world. This results in a significant environmental impact.

And thankfully, we'll be talking with some people who've been spending a lot of time thinking about where the biggest levers are to do something about this in the context of the climate crisis. And they'll be sharing their research on how we can end up with greener, more sustainable networking. But before we dive into the specifics, let's do a quick round of intros.

So Eve, maybe you can introduce yourself and your work at Intel.

Eve Schooler: Hi, thank you for inviting me. I am a principal engineer and I'm a director of Emerging IoT Networks at Intel. And my current work focuses on evolving the internet toward a sustainable edge-to-cloud infrastructure. My background and expertise is primarily in networking and distributed systems. And although I've spent much of my career in industrial research, I currently straddle a business unit at Intel, called the Network and Edge business unit,

as well as the corporate strategy office, where I'm responsible for sustainability innovation and standards. And I'd say that something that's really colored my experience is that I've spent much of my career heavily involved in internet standards and standards bodies, such as the IETF, which is the Internet Engineering Task Force.

Earlier in my career, I developed control protocols for internet telephony and multimedia teleconferencing, but at present I'm heavily involved in, and leading, a working group focused on deterministic networks and their extension to operate in larger networks. I'm also actively involved in the Open Group's Open Footprint Forum, which aims to standardize the carbon footprint data model.

So you can hear sort of my internet hat, as well as the sort of sustainability hat, both coming to the fore.

Chris Adams: Cool. Thank you Eve. And thank you once again for getting up at the crack of dawn to join us from California today. Okay, Romain, I know it's a somewhat more sociable time for you, so maybe I'll just give you a chance to introduce yourself and then we'll dive into some of this.

Romain Jacob: Sure. So I've been studying at ETH Zurich in Switzerland for seven years now. The first five I spent during my PhD on low-power wireless communications for embedded systems, so the question of how to save energy was kind of core to everything I was doing there. And after I graduated in 2019, I moved on to more internet-related topics.

And most recently I've been interested in how we can reduce the energy consumed by networks in general, with a focus on wired networks. And I'm trying to see to which extent the concepts of low-power wireless networking could be translated into wired networks, and how we could leverage that to make the internet more energy efficient.

Chris Adams: Cool. Thank you. And for context, I found out about Romain through a conference called HotCarbon, which is a kind of energy and carbon nerd conference online, where his paper, what was it, "The internet of the future must sleep more and grow old"? Something like that. That tickled me.

I should introduce myself. My name is Chris Adams. I am the head of the policy group at the Green Software Foundation, and I'm also the executive director of the Green Web Foundation, an NGO based in the Netherlands campaigning for a fossil-free internet by 2030. And now you know all our names.

Maybe we should jump into the actual topic of greener networking. So Eve, I first came across your work with a recent paper called Toward Carbon-Aware Networking. And in that paper, there was actually some useful information setting the scene in terms of how much of the tech industry's footprint networks make up, compared to, say, data centers and computing.

I wondered if you might be able to just expand on some of that, because this is a useful piece of context, and previously we've spoken primarily about data centers rather than the kind of aggregate impact.

Eve Schooler: Absolutely. I mean, in the press we hear a lot about how data centers are consuming the world in terms of their energy usage. And it's interesting, because there are studies that suggest that networking is as large or larger than the data center in terms of its consumption. And when you dig into the numbers a little further, networks have been estimated to consume as much as one and a half times as much as data centers. And even within data centers, networks already account for between 10 and 20% of the energy there.

So those numbers set the context, which is why it feels like networking deserves some further investigation, and solutions.

Chris Adams: So this is one thing that I might ask you, Romain, to come in on, because previously we've heard, while there are tools like, say, CO2.js, or websites like Website Carbon, which will give you an idea of the environmental impact from, say, looking at a website, or you'll see stories about things like, say, the environmental impact of watching Netflix, for example.

But as far as I'm aware, the actual energy usage of networks has historically been something which is relatively stable compared to those other usages. And I wondered if you might be able to expand a little bit on that part there. Like, is there scope for a change, or is it just a static figure that we have, no matter how much we use the internet?

Romain Jacob: Yeah. So there are two points to this. One thing which is very true, that relates to what you were saying, is that the energy consumed by networks at any point in time, say in a time span of a year or months, tends to be fairly constant. There have been a number of studies that show this: the energy consumed by the network is essentially independent of the load.

So if you are using 10% of the capacity or a hundred percent, it's essentially the same thing. So that stable number has increased over time as we've scaled up the networking infrastructure, but for a given infrastructure, the energy you consume, so the power you draw at any point in time, tends to be fairly constant.

That's kind of worrying, because we are typically operating very far from the hundred percent point. We tend to over-provision our networks, meaning we want to make sure they are capable of much more than what we typically ask, which means that we essentially use a lot of energy all the time, whereas we are using the infrastructure fairly little.

Chris Adams: So if I follow what you're saying, this is a little bit like maybe 10 or 20 years ago, kind of pre-cloud, where you might have a big, fat, chunky server and you plan for the maximum capacity. And as a result, if you look at the usage you have there, because you can't really scale that server down, you've got that kind of relatively coarse-grained amount of energy usage. Is that a model that is actually helpful to think about when we are looking at network usage?

Romain Jacob: Kind of, yes. So when you talk about compute and what servers are doing, a lot depends on the workload you're actually working with, but you can scale the power up and down depending on how much compute you're doing. Whereas when you're looking at networking, it doesn't really work like this, because you have very little compute that actually happens in the network.

What the network consumes energy for is powering the memory from which you read the routing information, for example, and the optics, so reading packets in and out. And all those things are essentially dominated by idle power, which is the power you draw just to turn things on.
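The load-independence Romain describes can be sketched with a toy power model: a large constant idle floor plus a small load-proportional term, so the energy cost per bit carried explodes at low utilization. All figures below are illustrative assumptions, not measurements of any real device.

```python
def power_watts(utilization, p_idle=150.0, p_max=180.0):
    """Linear power model: a large idle floor plus a small load-proportional part.

    utilization is a fraction between 0 and 1; p_idle and p_max are
    made-up numbers standing in for a router's idle and full-load draw.
    """
    return p_idle + (p_max - p_idle) * utilization

def energy_per_gigabit(utilization, capacity_gbps=100.0):
    """Joules needed to carry one gigabit at a given utilization."""
    return power_watts(utilization) / (capacity_gbps * utilization)

# At 10% load the device draws 153 W vs 180 W at full load: near-constant
# power, but each gigabit carried at 10% load costs far more energy.
low_load_draw = power_watts(0.10)
full_load_draw = power_watts(1.00)
```

With these assumed numbers, running at 10% utilization uses 85% of the full-load power while carrying a tenth of the traffic, which is exactly the over-provisioning worry raised above.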

Eve Schooler: I wanted to make a big distinction, which is that much of the core network has this property that, whether or not you've got high usage, you know, lots of packets flowing across it or not, it is gonna have this constant amount of energy draw. But the wireless network was inherently taught, from the beginnings of its design, to be fairly adaptive.

So I think that's one of the distinctions being made here: that wireless and wired networks behave quite differently in the face of congestion, or even just traffic on the network.

Romain Jacob: Yes, it's very true. Very true.

Chris Adams: Okay. So if I were to apply some kind of mental model for this, you might think about kind of backbone networks as almost constant the entire time. And then the closer you get to the surface, or to end users, you might have a bit more kind of spikiness going up and down.

And that's like a way to think about where some of the levers for reducing impact might be. So if we're speaking about consumption, and that gives us some way to think about the energy used, there is another kind of source of leverage, which is the carbon intensity of the energy itself. And as I understand it, Eve, this was some of the work that was presented at HotCarbon, and some of the work in the paper that you've been contributing to, Toward Carbon-Aware Networking. Maybe you might expand on some of that, because there are some really fascinating ideas I found in that.

Eve Schooler: Sure. I mean, as you alluded to earlier, there's been, at least in the data center community, an awareness of what is the quality, if you will, of the energy that is being drawn from the socket. And what I mean by that is: what is the carbon intensity? How low a carbon intensity can we get, towards using clean energy or renewable energy?

So the lower the number, the better. And data centers in recent years have begun to experiment with, and now are operationalizing, the idea of time- and space-shifting workloads to align with the availability of clean energy. That's interesting for a bunch of reasons, the most important of which is that, as Romain was saying earlier, the footprint for data centers and ICT, you know, information and communication technology, continues to grow.

And especially in the face of all the increased amount of data that we're sending across networks. And so pairing a data center with renewable energy enables us to reduce the carbon footprint of those data centers as they consume more energy. But similarly, in the electrical grid domain, we also have more and more integration of renewables, in places like California, which is where I'm based, and in Germany, and other parts of the world where that integration is happening quite rapidly.

There are parts of the day where there's way more renewable energy than we can possibly consume, and so it just gets dropped on the floor. It gets wasted. And so there's been this lovely pairing: you've got an entity that's consuming a lot of energy going to renewables, and the renewables creating excess and looking for somewhere to consume it.

So you can think of compute as load balancing, or as being virtual batteries, for the data centers. And it begs the question: if networks are using one and a half times as much electricity, why aren't we using those same techniques in networks? And so there is this growing awareness of where are the places where we can put renewables, in order that networks are consuming cleaner energy as well. And can we, and is it worthwhile to, time- and space-shift the transmission of our network loads, in order that they have a smaller carbon footprint?
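The time- and space-shifting Eve describes can be sketched as a small search over a carbon intensity forecast: given predicted grid intensity per region and hour, a deferrable transfer or workload picks the greenest slot. Region names and gCO2/kWh figures below are invented for illustration.

```python
# Hypothetical carbon intensity forecast, in gCO2/kWh, for the next
# four hours in two made-up regions ("us-west" has a midday solar dip).
forecast = {
    "us-west": [220, 180, 90, 70],
    "us-east": [400, 390, 380, 370],
}

def greenest_slot(forecast):
    """Return (region, hour, intensity) with the lowest forecast intensity."""
    return min(
        ((region, hour, intensity)
         for region, series in forecast.items()
         for hour, intensity in enumerate(series)),
        key=lambda candidate: candidate[2],
    )

# A deferrable job would be shifted in time (hour 3) and space
# (us-west) to meet the cleanest energy.
best = greenest_slot(forecast)
```

Real systems would of course layer constraints (deadlines, data locality, transfer cost) on top of this bare minimum.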

Chris Adams: Okay. So this is actually quite interesting to me for a number of reasons, because if I understand the most common tools we might use for moving packets of data around the world, we don't have that much control ourselves directly. So I might send something to the next hop, but there's something like the Border Gateway Protocol that decides where the next hop is, and so on, and so on, and so on.

So I might have some indirect control there. And there are, say, clean-slate attempts to redesign parts of the network, or even introduce a notion of kind of path awareness, from connecting, say, something you have here to maybe a website, so you could take a kind of greener route. Would either of you have anything you could share there? Because,

as I'm aware, things like the Border Gateway Protocol, BGP, has maybe one main criterion, and it sounds like we might want to be able to use multiple criteria. Like, I care about latency, but I also want to balance that with carbon intensity, for example, or even cost. I'll open it up to see if anyone has anything they might wanna share here.

Romain Jacob: Yeah. So, as you mentioned, BGP is kind of like the glue that connects the internet together, and it has been studied how to extend it, improve it, and change it over the past, I don't know, 30 years or so, in various directions, for various objectives. Usually security is the main concern that people have with BGP.

But most recently there have been some different approaches that go away from BGP. One example of that is the SCION network architecture, which is also coming from ETH, and that is trying to let the end host pick which route the traffic should go through in the internet. It's an idea that is generally known as source routing.

So the source of the traffic should say: I want my traffic to go through this network, then this network, then this network, until I reach my endpoint. And once you have this tool, so this is good for security purposes, but if you have this, you can also use it for carbon awareness. You could also say: I prefer to go through California, because they have a lot more renewable energy, rather than, I dunno, some other state in the US that may not have as much,

Eve Schooler: Like Virginia.

Romain Jacob: for example.

Eve Schooler: West Virginia.

Chris Adams: Yeah.

Romain Jacob: It empowers you to do this. If you want to do that, it is possible.
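Carbon-aware source routing, as sketched in this exchange, could look like the toy selection below: the sender knows a few candidate paths (sequences of networks) and the current carbon intensity reported for each hop, and picks the lowest-carbon path. All network names and intensity values are hypothetical.

```python
# Hypothetical current carbon intensity (gCO2/kWh) per transit network.
intensity = {"AS-CA": 80, "AS-NO": 30, "AS-VA": 450, "AS-EU": 120}

# Candidate source routes the sender is allowed to pick between,
# e.g. as a path-aware architecture like SCION would expose them.
paths = [
    ["AS-CA", "AS-NO"],   # through California and Norway
    ["AS-VA", "AS-EU"],   # through Virginia and central Europe
]

def greenest_path(paths, intensity):
    """Pick the candidate path with the smallest summed carbon intensity."""
    return min(paths, key=lambda path: sum(intensity[hop] for hop in path))
```

Summing per-hop grid intensity is a crude proxy (it ignores how much energy each hop actually spends per packet), but it captures the "prefer to go through California" idea in a few lines.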

Eve Schooler: Another way to have a mental model about this: network performance has often been characterized by attributes like latency, like packet loss, like jitter, the variance in the latency, as metrics for the success of transmissions across the network. And so the idea is how to teach things, teach protocols, whether it's BGP or other parts of the network fabric, and even other parts of the network stack, about carbon intensity,

so that carbon intensity is another metric, a first-class metric, in the selection of these routes, whether it's routes, or software usage, or scheduling. And so in some ways we need to teach many of the protocols that we know and love in the internet about these additional options, so that we can do joint optimizations, or we can create source routes, as Romain was suggesting.

But it is really very simpatico with this idea of deterministic networks in the small. Some of the work that is being done around time-sensitive networks, for example, is all around selecting paths or subnets that have the lowest latency, and even creating multiple paths in order to ensure that packets get delivered in time.

But what if the constraint that we really wanted to optimize for, in certain circumstances, was the carbon intensity? And it really also leads us to ask, you know, how do we educate all of this software that's out there, not only with carbon intensity information, which is very location-, time- and space-specific, but how do we also enable our applications to say how time-elastic they are, in order to be shifted around or delayed?

So they're both issues to address.
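One minimal way to make carbon a first-class metric alongside latency, in the spirit of the joint optimization Eve describes, is a constrained selection: discard paths that miss the latency budget, then pick the lowest-carbon survivor. Path names and numbers are illustrative only.

```python
# Candidate paths with a latency attribute and a carbon score
# (e.g. summed per-hop intensity); all values are made up.
paths = [
    {"name": "direct",    "latency_ms": 20,  "carbon": 500},
    {"name": "via-green", "latency_ms": 80,  "carbon": 120},
    {"name": "detour",    "latency_ms": 300, "carbon": 60},
]

def pick_path(paths, latency_budget_ms):
    """Lowest-carbon path among those meeting the latency budget."""
    feasible = [p for p in paths if p["latency_ms"] <= latency_budget_ms]
    if not feasible:
        raise ValueError("no path meets the latency budget")
    return min(feasible, key=lambda p: p["carbon"])
```

A latency-sensitive flow (tight budget) falls back to the fast dirty path, while an elastic one (loose budget) takes the greenest detour, which mirrors the per-application trade-off discussed here.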

Chris Adams: All right. So what this seems to be speaking to is this idea of moving from maybe just one set of criteria to a wider set. So for example, if I'm doing a video call to Australia, I might be prepared to care more about latency than the cost, or say the carbon.

And if I cared about, say, doing a download of a video from Netflix, and I'm not gonna watch it right now, I might say: well, I care more about the cost and the throughput, not latency, and making it go through a kind of green route. So I would rather have some low-carbon internet trick shot, bouncing through the greenest possible places, to end up on my computer for when I come home tonight, for example. That seems to be some of the directions this might be heading towards.

Okay. Wow. That's quite exciting, actually. So this also speaks to these ideas of maybe changing how we might design software in the first place, and having different tolerances. Eve, maybe you might speak to this bit, cuz you mentioned this phrase I haven't come across before: time-sensitive networks.

So maybe you could expand on some of that, and on the flip side of that, which presumably will be delay-tolerant networks.

Eve Schooler: Yes. The time-sensitive networking community, for example, some of the work that I've been involved in in recent years comes out of the Internet of Things group at Intel, and in particular the industrial Internet of Things context, where you've got control systems that have very, very low latencies, less than a millisecond, for example.

And so you're talking about subnets, very small networks. But some of the work in the IETF is about, well, how do you go across factory floors? How do you enable them to be time-sensitive across subnets, which may have different underlying technologies? And so all of them need to be taught sort of how to do this. Now, in the time-sensitive networking world, often one of the strategies, in addition to, as I mentioned earlier, multi-path,

having multiple paths by which packets can go between sources and destinations, so there's redundancy for reliability, but there's also the reservation of resources along the path. And for that, it starts to look a little bit like what we were talking about when Romain was referencing the SCION work. What if you knew how much time it took you along each hop,

and you had a certain budget? Well, you could send out a query message between a source and a destination to understand, along the way, cumulatively, how much latency am I going to encounter, reserve enough buffers in those queues, and eliminate congestion along that path. And that's sort of what time-sensitive networking in the small has been doing.

Now, the kinship that it has with delay-tolerant networks is that we wanna expand these time-sensitive networks. We wanna teach them about energy usage and energy awareness, carbon awareness. But these delay-tolerant networks, back to the data center analogy: data centers are shifting their workloads to align with when the sun's shining or the wind is blowing,

and so they're holding onto their workloads. There's been a longstanding project in the networking community around delay-tolerant networks that have been designed primarily for deep space, because, you know, routers come and go, because planets align in certain ways, or satellites align in certain ways, and they're not always there.

And so that's why they have to be delay-tolerant. There's this dynamic of the availability of the resources. And so the question is: could we be using delay-tolerant networking in contexts for more than just satellites, in this context where we wanna align with the availability of clean energy?

Romain Jacob: Yeah, I totally agree with what Eve was saying. And this idea of delay tolerance, more tolerance in general in networking, is necessary to progress towards more energy efficiency or carbon efficiency. This is essentially what wireless networks, as Eve was saying, have been doing forever. In wireless,

how do you save energy? Well, you keep things off for as long as possible, right? You try to make sure that when you turn on your radio, it's to achieve something useful, and then packets go through as efficiently as possible. And if you think of it, it's extremely easy to push the energy efficiency, right?

What do you do? You turn off for 90% of the time, and then you schedule very tightly the time where you stay on. The problem is that that induces delay, right? And your application has to be able to tolerate that delay. And embedded systems, IoT, all this work, this field has been working on different tradeoffs to play with this, so that the application performance does not degrade too much due to the delays induced by the networking part.

And that's like the story of what I've been doing during my PhD. The problem is that the internet networks, the wired networks, they've been built on a different paradigm. It was all about reliability. It's been designed to be as reliable as possible. Like, if we have a nuclear war, the internet should still work.

That was the initial idea, right? So we need to make sure to provide all the levels of reliability possible to sustain anything. But we need to get away from this now, because the cost of this is that we over-provision everything. We have a lot of redundancy, and we use very little of that. So some of the things that I'm thinking about, together with several colleagues now, is, okay:

what if we were to redesign those wired networks, so that reliability is not something we get rid of, but we modulate the requirements we set there, and say reliability is just one objective? How much performance degradation are we willing to tolerate in order to save energy? To give a very concrete and simple example, most traffic on the internet is driven by

human activity, right? And human activity has a very clear seasonal pattern. We use the networks more at certain times of the day and not at others. It's very easy to think that we could turn off part of these networks for certain parts of the day, because we don't need that much bandwidth. And if we do, we might be able to tolerate a bit more delay than at peak hours.

It's very similar to turning off the public lights on the streets, you know, at night when nobody's driving, right? It's the same principle.
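The street-lights idea Romain gives here amounts to sizing the number of active parallel links to forecast demand plus headroom, hour by hour, instead of keeping every link powered around the clock. Link capacity, headroom factor and the demand curve below are all invented for illustration.

```python
import math

def links_needed(demand_gbps, link_capacity_gbps=100, headroom=1.25):
    """Smallest number of links whose capacity covers demand plus headroom.

    At least one link stays up so the path never disappears entirely.
    """
    return max(1, math.ceil(demand_gbps * headroom / link_capacity_gbps))

# Hypothetical hourly demand (Gbps) from night into the evening peak.
hourly_demand = [40, 30, 20, 20, 60, 180, 320, 350]

# How many of, say, 5 parallel links to keep powered each hour.
plan = [links_needed(d) for d in hourly_demand]
```

With these numbers the plan keeps a single link up through the night and only powers the full bundle at peak, which is where the idle-power savings discussed earlier would come from.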

Eve Schooler: Or even in your home, right? The analogy of one's parents growing up: don't forget to turn off the lights. It's exactly the same analogy.

Romain Jacob: Yeah, it's the same idea, right? And there is no reason this cannot be done. We know we can do it. The question is: how far can we push it? And one limiting factor, one blocking factor, at the moment is how quickly we can turn things on and off, because switching on a router or a switch

takes, as of today, on the order of several minutes, right? So it's not something that you can just do multiple times per hour or so, because essentially your network will be completely unusable. It can be changed: if we were to change the hardware, if we were to change the operating system we run on those machines,

we could improve on that. How far can we go? This is kind of an open research question at the moment.
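The minutes-long wake-up time Romain mentions puts a floor under how short an idle gap can be before sleeping pays off. A rough break-even check, with all power and timing figures assumed purely for illustration, might look like this:

```python
def worth_sleeping(idle_gap_s, wakeup_s=300, sleep_power_w=5,
                   idle_power_w=150, wakeup_power_w=180):
    """Compare the energy of sleeping-then-waking against idling through a gap.

    wakeup_s models a several-minute boot; the power figures are invented.
    """
    if idle_gap_s <= wakeup_s:
        # The device would not even be back up before traffic returns.
        return False
    sleep_energy = (sleep_power_w * (idle_gap_s - wakeup_s)
                    + wakeup_power_w * wakeup_s)
    idle_energy = idle_power_w * idle_gap_s
    return sleep_energy < idle_energy
```

With these assumptions a ten-minute gap is already worth sleeping through, but anything close to the wake-up time is not, which is why faster hardware and OS boot paths would widen the opportunity.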

Chris Adams: That's really, really helpful, and thank you for explaining it in that way. I presume this is the "the internet must sleep more" part of your paper, where the internet must sleep more and grow old, right? What you're talking about here is actually the idea of things not necessarily being available all the time. And the idea of reliability moving to different parts of the system is actually quite an interesting one,

and one we've seen with the cloud.

Eve Schooler: And actually, there are a couple of seminal pieces of work that I think we can look back on as really setting us down this path. So for example, you know, Maruti Gupta's work from SIGCOMM was all about, you know, energy efficiency, and beginning to examine how much we could save if we began to turn things off. Another interesting paper that's been influential is Dina Papagiannaki's work on access points,

and whether parts of the wireless edge network, you know, internal to buildings, could be turned on on demand. So it's sort of the opposite idea of turning things off; it's like you turn things on, on demand. And as Romain suggested, we understand the patterns of usage of those things. We know when people come into buildings, whether they're in their homes or whether they're in their offices.

And so an on-demand infrastructure at the very edges of the network makes a lot of sense. So those are two pieces of work that certainly have influenced my views on teaching devices how to sleep.

Chris Adams: Okay. Cool. And this idea of matching demand to supply: Eve, you mentioned speaking from California. I mean, just this week we saw a really good example of demand response, where you saw CAISO, the grid operator, basically say: hey folks, we're about to kind of hit a blackout, can everyone please just turn things down a little bit?

And if I understand it correctly, we were able to see basically one of the largest grids not fall over. And this is an example that you don't only have to think about supply, by the sounds of things.

Eve Schooler: And it raises an interesting question, because how much of the network involves the user? How actively attentive are users when they route across the network? Right now there's very little engagement. So one of the issues that we probably need to solve is creating, as you were referring to them, levers at different points in the architectural software stack, and even in the hardware,

that allow different levels of involvement for users that have different capabilities or interests in enacting those levers. All week we have been receiving warnings about the coming temperatures. You know, it was 109 degrees, unheard of here; it broke all records the other day. But it was through constant messaging that we were asked to please be considerate during the peak,

in particular between 4:00 and 9:00 PM. I guess that's when people begin to come home and turn on their air conditioners. The network doesn't do that. It doesn't ask you to be thoughtful. But maybe our software and our software development practices need to incorporate this.

Chris Adams: Thank you for that. So for other listeners, we did cover some of this in episode nine, where we speak about carbon-aware computing and the idea of annotating, say, jobs for Kubernetes or other schedulers, to basically say: yeah, I can wait a little while, I'm, you know, important, but not urgent. But it sounds like there's more.

This reminds me of a blog post by a guy called Ismail Philco, who's in the ClimateAction.tech Slack. And he's been speaking about the idea of: is there a chance to extend some existing protocols? Like, say, we have OpenAPI for describing how APIs work on the web, and there is AsyncAPI, which is another way to do that, which is, as far as I'm aware, used for lots of kinds of programming tools these days, as a way

to do the same thing asynchronously. And there's some work there to basically extend this with a notion of delay tolerance or location tolerance, so that you can basically say: this thing is important, but it's not so urgent. In the same way that with an operating system, with Apple's, for example, you can annotate particular tasks to either be returned very quickly when there are users waiting, or as something which might be better suited to a low-power core in a computer.
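The "important but not urgent" annotation idea could be sketched as follows: each job carries a delay tolerance, and a scheduler places it at the lowest-carbon hour inside that window. The forecast values and job names are hypothetical, and this is a stand-in for what a real carbon-aware scheduler would do, not any particular tool's API.

```python
# Hypothetical grid carbon intensity forecast (gCO2/kWh) for the
# next six hours; hour 3 is the greenest.
forecast = [300, 280, 120, 90, 250, 400]

def schedule(jobs, forecast):
    """Map each job name to its greenest admissible start hour.

    jobs is a list of (name, max_delay_hours) annotations: 0 means
    "run now", larger values mean the job tolerates being deferred.
    """
    plan = {}
    for name, max_delay_h in jobs:
        window = forecast[: max_delay_h + 1]
        plan[name] = window.index(min(window))
    return plan

jobs = [("video-call", 0), ("nightly-backup", 5)]
```

The urgent job runs immediately regardless of intensity, while the elastic one waits for the green window, which is exactly the behavior the annotation is meant to unlock.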

So maybe the thing I wanna ask is: right now, we spoke about some kind of cool future things. If I could bring this to some of the stuff that's happening these days: if people are listening to this and they want to do something, or start playing around with some of these ideas, where should people be looking?

What kind of software is out there? What kind of tools exist for people to kind of experiment with some of these ideas, to play with in their own time, or even possibly build some cool new services on top of, for example?

Eve Schooler: A couple of thoughts, at least one is that at Intel, there is a power, a dynamic power management. Solution that exists called speed select technology. And it does allow you to dynamically adjust the frequency of cores. And there's some interesting description of that technology at the most recent I C N.

Conference in 2022 in a joint paper with British telecom, there was a paper on NFV and energy efficiency describing that service. But for developers, I would say some of the most interesting APIs I've come across are from there. There are quite a few offerings to get carbon intensity information from the electrical grid, but use it in computing systems.

And some of the interesting APIs are from WattTime and Electricity Maps. So I would say you could play with those to see whether you want to incorporate carbon intensity, and understand what the patterns of carbon intensity are where you reside, or where you want your workloads to reside.
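The carbon-aware pattern Eve describes, querying a grid-intensity API and then running work in the cleanest window, can be sketched in a few lines of Python. This is a minimal illustration: the forecast numbers below are hard-coded stand-ins for what a provider like WattTime or Electricity Maps would return, and no real API call is made.

```python
from datetime import datetime, timedelta

def pick_greenest_window(forecast):
    """Return the start time of the forecast slot with the lowest
    grid carbon intensity (gCO2eq/kWh).

    `forecast` is a list of (start_time, intensity) pairs, standing
    in for the response of a carbon-intensity API.
    """
    return min(forecast, key=lambda slot: slot[1])[0]

# Hard-coded stand-in data: intensity dips overnight in this example.
now = datetime(2022, 9, 12, 20, 0)
forecast = [
    (now + timedelta(hours=h), intensity)
    for h, intensity in [(0, 420), (2, 380), (4, 310), (6, 290), (8, 350)]
]

best_start = pick_greenest_window(forecast)
print(best_start)  # prints 2022-09-13 02:00:00, the 290 gCO2eq/kWh slot
```

A real scheduler would refresh the forecast periodically and defer flexible work until the chosen window opens.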

I also wanted to put in a plug for a workshop that's coming up, being hosted by the Internet Engineering Task Force, on the environmental impact of Internet applications and systems. And the deadline for putting in submissions, Romain?

Romain Jacob: I'm aware.

Eve Schooler: It's the end of October, with the workshop happening in December.

And then finally, one of the things that's been, I wouldn't say bothering me so much as frustrating me, is the long lag time between our assessment of the overall Internet footprint and the time it goes to publication. There's a small group of people who diligently publish these assessments, but it's really backbreaking work to understand where the pain points are in the infrastructure and topology.

So I would provide a call to action, if you will, for networking researchers involved in the Internet to help speed up accurate and timely assessment of the networking, and overall ICT, energy usage by participating in and contributing to these ITU documents. It's called L.1470, but its human-readable name is the GHG, for greenhouse gas, emissions trajectories for the ICT sector. So if you have some insights into the pain points of the Internet's energy usage, where we could be more efficient, turn things off, use things for longer, or be aware of carbon intensity, we'd like to hear from you.

Chris Adams: When you spoke about that, that does remind me: this is the Green Software Foundation podcast, and I would be remiss not to mention that there is a green software Carbon Aware SDK, specifically, that wraps some of these APIs so you're able to use them. I think it's primarily written in .NET, but I believe there might be a Go build of this somewhere as well. Independently of this, the organization I work for, the Green Web Foundation, has built CO2.js, which basically has a lot of the kind of carbon intensity figures inside it now.

And also grid-intensity-go, which is a Golang library specifically designed to allow you to, again, wrap these APIs and use them in scheduling tools. The other work that might be worth being aware of is that there's some work with RIPE, which is the organization that issues IP addresses in Europe.

They've been funding us, our NGO, to basically annotate every single public IP address on Earth with carbon intensity information. So if there's a chance to build some of the green routing stuff, we have an IP-to-carbon-intensity API from our organization that will give you some figures for free, but these are annual figures.

These are not gonna be fluctuating or updating the way that WattTime and some of the other providers do. There's also some work from Singularity, who've recently started sharing some information at hourly resolution all across America for people to be looking at.
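The distinction Chris draws here, annual averages versus hourly figures, matters when you turn energy into emissions, because grams of CO2 are just kilowatt-hours multiplied by the grid intensity at the time of use. A small sketch with invented numbers shows how the two approaches can diverge:

```python
# Energy used by a job, in kWh (illustrative, not measured).
energy_kwh = 10.0

# Annual-average grid intensity (gCO2eq/kWh), as an annual dataset
# like the one described above might report it.
annual_avg_intensity = 350.0

# Hourly intensities over the job's actual run window, as an hourly
# provider reports them (again, invented values).
hourly_intensity = [450.0, 420.0, 300.0, 250.0]

# Estimate 1: annual average applied to the whole job.
grams_annual = energy_kwh * annual_avg_intensity

# Estimate 2: energy spread evenly over the hours, each hour
# weighted by its own intensity.
per_hour_kwh = energy_kwh / len(hourly_intensity)
grams_hourly = sum(per_hour_kwh * i for i in hourly_intensity)

print(grams_annual, grams_hourly)  # prints 3500.0 3550.0
```

The gap between the two estimates grows when a job happens to run during an unusually clean or dirty period, which is exactly what hourly data lets a scheduler exploit.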

Romain Jacob: One point I wanted to make is that I do agree that this question of carbon intensity and carbon awareness is important. For sure, we need to be able to improve on that metric, but we should not forget that at the end of the day, the best energy is the one we do not consume. And so we should also keep investing effort into being more energy efficient.

While keeping in mind that consuming less energy, if that energy has to be more carbon-heavy, is not necessarily the best tradeoff. Still, we should look at the low-hanging fruits in reducing the energy we consume for the current service the networks are providing. And I mentioned earlier today this study of seasonality, analyzing the levels of redundancy that exist in networks.

I think in many internet service provider networks, so kind of the edge of the internet, where we have strong seasonal patterns in traffic, there are low-hanging fruits. And as a paper at HotCarbon was mentioning, there are many of those small networks, so the benefits you can get there actually add up pretty quickly. Even if they don't seem interesting when you look at a single network, if you apply those principles everywhere, you can achieve a very large effect. And that's something every network operator should have a look at, if only to reduce their energy bill.
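Romain's point about seasonal traffic and redundant capacity can be made concrete with a toy calculation. The link counts and demand figures below are invented for illustration; the principle, powering only as many parallel links as current demand requires while keeping some minimum redundancy, is the one he describes:

```python
import math

def links_needed(demand_gbps, link_capacity_gbps, min_links=1):
    """How many parallel links must stay powered on to carry the
    demand, keeping at least `min_links` up for redundancy."""
    return max(min_links, math.ceil(demand_gbps / link_capacity_gbps))

# A bundle of 4 x 100 Gbps links; traffic dips sharply at night.
capacity = 100   # Gbps per link
installed = 4    # links in the bundle
hourly_demand = [320, 310, 280, 150, 90, 60, 80, 240]  # Gbps, sampled

# Links that could sleep in each sampled hour, keeping 2 up minimum.
asleep = [installed - links_needed(d, capacity, min_links=2)
          for d in hourly_demand]
print(asleep)  # prints [0, 0, 1, 2, 2, 2, 2, 1]
```

Summed over a year and over many small networks, those idle-hour savings are the kind of low-hanging fruit the discussion points at.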

Eve Schooler: I think you're absolutely right, Romain. I think there are three things to consider, actually, when we think about green networks. One is, first and foremost, energy efficiency: use less. Then, if you're going to use energy, ensure that it's decarbonized. But then there's this third facet, which we haven't even touched on, which is the other environmental impacts, whether that's water or toxicity or air pollution, whatever it is, that also need to be somehow captured in metrics as well, and ultimately comprehended.

Chris Adams: We're running short on time, so I'm gonna ask one question, just because it's very rare to have people who understand the network help answer it. For people who might feel bad about, say, watching Netflix, or feel guilty about being on video calls: would either of you have something to say to people who might be struggling with this, to maybe put their mind at ease or help them come up with a mental model? Like, should they be feeling bad about the environmental impact of the video calls they make, or the videos they're watching?

Romain Jacob: Uh, I don't think trying to make people feel guilty will change anything. People don't have the levers to change anything, most individuals like you and me. I mean, you open your laptop, you have a service provider, and you don't have any control. I mean, you can choose provider A or provider B, and they may sell you some broad characteristics of the internet connection they provide you with.

But, I mean, you can monitor this if you're an internet geek and you care about these sorts of things, but you cannot truly influence where it goes. That's not exactly true, but in practice the individual has very limited control. The network providers do; it's at that level that things need to be changed.

Now, that being said, you can still do something, right? You can, for example, just be considerate before constantly streaming, and before uploading gigabytes and terabytes of pictures to clouds, and to multiple providers because you care that if Google goes down and Facebook is not, you still get to access your things, you know?

Yes. But how many people are actually doing this? I don't think so many.

Chris Adams: Okay. So there we have it from someone who's in the final stages of their PhD. And Eve, is there anything you might add for people who are wrestling with this particular issue themselves, when they open up Zoom to speak to a loved one or anything like that?

Eve Schooler: I think it's like anything else in our lives: we need to be acculturated to thinking about this as an issue. I don't think we should have that much guilt about it, but we should be thoughtful. And so if it doesn't make a difference in the communication to have the video on when you're just a participant versus a speaker, or if you can do low-def versus high-def, those are really easy decisions.

And I think there will come a time when people will be asked to fit within carbon footprint budgets, in companies and so forth, and so we'll have to do our part. So we should be getting in the habit of at least thinking about these things. But, as others have said, we don't have that many choices except on or off; it's sort of a Boolean choice. Maybe, you know, one resolution or not. And something about teleconferences is that we save a tremendous amount using teleconferencing technology over air travel and other forms of travel. So, you know, incrementally we're getting more and more efficient. As to all those Netflix shows that we're watching, that's another concern; again, maybe we'll be given a budget in time.

Chris Adams: All right. Okay, thank you for that. From what I'm hearing, it might not be the case that streaming is equivalent to flying, so that's one thing that we can take into account. All right, we're just gonna wrap up now. For people who have enjoyed this and want to learn more, where can they find you online, or where should they be going to learn more about the work that we've discussed here?

Romain Jacob: Yeah, so you can find me on my website, romainjacob.net. This is where you will find most work-related updates. Otherwise, my name will be easy to find on Twitter. I will not read out my handle because it's unreadable, but my name works fine; I tried it before.

Chris Adams: Okay, excellent. All right, we'll be adding the links in there. And Eve, if people have been interested in some of the stuff you've been talking about, where should they be looking?

Eve Schooler: Similarly, you can find me at eveschooler.com, and on LinkedIn as well.

Chris Adams: Okay. I'm really glad, folks; I've really enjoyed this session, and I think we've covered a lot of really quite helpful ground for other people who are wrestling with this, or who are just curious about it. To the listeners: thank you very much for listening to Environment Variables. All the resources for the podcast will be available at podcast.greensoftware.foundation.

Along with copious show notes, with all our links for this. If you did enjoy this, please do write a review wherever you hear your podcasts. It really does help us find new audiences. And yeah, that's us. Cheers, folks.

Romain Jacob: Bye. Bye.

Eve Schooler: Bye

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing; it helps other people discover the show, and of course we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation, in any browser. Thanks again, and see you in the next episode.