9.1.2025 — Zayo is a communications infrastructure provider focused on North America that is investing in the space for the long term. Demand for much larger chunks of fiber capacity is being driven by artificial intelligence: training workloads today, with last-mile bandwidth for inferencing expected next. The conversation also covers where edge compute fits in meeting that demand.
Full Transcript
- 0m10s Speaker 0
-
Hello, and welcome to the two hundred and thirty third episode of The Week With Roger, a conversation between analysts about all things telecom, media, and technology by Recon Analytics. I'm Don Kellogg, and with me as always is Roger Entner. How are you doing, Roger?
- 0m22s Speaker 1
-
I'm great. How are you?
- 0m24s Speaker 0
-
Good. So Roger, this week, we're pleased to welcome Bill Long. Bill is the chief product and strategy officer for Zayo. Bill, welcome to the podcast.
- 0m32s Speaker 2
-
Hey, guys. Looking forward to the conversation.
- 0m35s Speaker 1
-
Glad you came on board. Let's start out, Bill, by giving our audience a brief overview of who Zayo is and what you do and what you're up to.
- 0m47s Speaker 2
-
Sure. So Zayo's a communications infrastructure provider focused on North America. We run a huge fiber network that goes all throughout the US and Canada, and hopefully one day soon into Mexico as well. We're truly the big pipes for big customers, so that includes both a long haul network with high fiber strand counts and tons of mileage within metros. Think of everything from a long haul network connecting hyperscale data centers, both between major metros and along those long haul routes.
- 1m17s Speaker 2
-
And then within a metro, connecting office buildings, cell towers, mobile switching centers, cable head ends, all the things that need to be connected within a metro and between metros. Obviously it starts at the infrastructure layer, where we own the actual fibers in the ground, and goes all the way up to layer two and layer three services, and then managed services as well for some of our enterprise customers. So that's what we do.
- 1m39s Speaker 1
-
It's everything from being a carrier's carrier to serving businesses and enterprises.
- 1m45s Speaker 2
-
That's exactly right. That's exactly right.
- 1m47s Speaker 1
-
So what would you say is the biggest trend? What's new in that market?
- 1m53s Speaker 2
-
Yeah, I mean, I think like all things everywhere, AI is the big trend. But in terms of how that's manifesting with us: twelve to eighteen months ago, when we would get a long haul fiber order for 12 to 14 strands, that would be a large dark fiber order. And when I say strands, think of it as we have these large fiber optic cables or conduits under the ground that are full of fibers. When someone would order 12 to 14 fibers, that was a big order. About twelve to eighteen months ago, we noticed a trend where we were getting multiple requests for 144 or 432 fibers.
- 2m28s Speaker 2
-
So more than an order of magnitude increase in the quantity of fibers that folks were asking for. And so we started asking, what's driving that? And the answer was AI. And so we did a couple of big deals. We did a big deal last summer.
- 2m41s Speaker 2
-
It was over a billion dollars in total contract value. We've done more since then, and we've got a very large funnel. So that was the big sort of head scratcher, well, not a head scratcher, but the big trend: when we built these networks and put the fiber in the ground, we were expecting consumption patterns in that 12 to 14 strand range. Now we're seeing an order of magnitude bigger. And if the demand is 10 times what you thought it would be, you start thinking, hey, better get more fiber into the ground, and better do it pretty soon.
- 3m9s Speaker 2
-
That's what we've been looking at.
- 3m10s Speaker 1
-
And so maybe you can translate it for our audience. What kind of a speed or bandwidth is a strand?
- 3m18s Speaker 2
-
It depends. The amount of capacity a piece of fiber can carry depends on the electronics that you put on the end of that fiber. What's market ready now: we're selling a ton of 400 gig, and at the high end we have 800 gig in the lab. So the market-ready range for a single wavelength is between 400 and 800 gig. Now you can pile multiple wavelengths on a single fiber.
- 3m45s Speaker 2
-
So a single fiber can carry terabits of capacity on it. So the level of demand that we're looking at is just through
- 3m52s Speaker 1
-
the roof. Super, right? Yeah. Yeah, that's why I wanted you to translate what 10 times more capacity means.
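The arithmetic behind "terabits per fiber" can be sketched quickly. The channel count below is an illustrative DWDM assumption, not a Zayo deployment figure; only the 400 gig per wavelength comes from the conversation.

```python
# Back-of-the-envelope DWDM fiber capacity, with illustrative numbers.
# The channel count is an assumption for the sketch, not a Zayo figure.

GBPS_PER_WAVELENGTH = 400     # market ready today, per the conversation
WAVELENGTHS_PER_FIBER = 64    # a typical C-band DWDM channel count (assumed)

fiber_tbps = GBPS_PER_WAVELENGTH * WAVELENGTHS_PER_FIBER / 1000
print(f"Per fiber: {fiber_tbps:.1f} Tbps")   # 400 Gbps x 64 channels = 25.6 Tbps

# Scale from the old "big order" to the new ones mentioned above:
for strands in (14, 144, 432):
    print(f"{strands:>3} strands -> {strands * fiber_tbps:,.0f} Tbps potential")
```

Even with conservative channel counts, a 432-strand order represents four orders of magnitude more potential capacity than a single wavelength.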
- 4m0s Speaker 2
-
Yeah, it's crazy. And so some of the math that we did: these builds that we're talking about doing, we recently announced we're pulling 5,000 new route miles. Each of those routes can take hundreds of millions of dollars to actually deploy. So we wanted to get really confident in the demand that was there. We wanted to make sure that this wasn't sort of a flash in the pan and that this demand wasn't gonna dry up.
- 4m21s Speaker 2
-
And so to get more conviction on what was driving this demand, we took a step back and looked at the best indicator we could find for overall aggregate demand and where that demand was gonna be. We stepped all the way back and said, who has reserved the chip fabs at TSMC? That gave us an indication of what types of chips were coming, and they reserve on a five year basis. So we know what chips are gonna be produced by TSMC. That gives you an aggregate demand.
- 4m47s Speaker 2
-
And then, to try to figure out where those chips were going to be, we looked at where there's available power. And then we translated the number of chips and the required power, since we would know where the data centers are, into both training and inference and how much capacity those different use cases would require. That helped us get the conviction that, hey, this demand is gonna be bigger than what we had initially thought and it's gonna last longer, grounded all the way down in the fundamentals of what's coming out of the chip fabs. That gave us a lot of conviction to invest in this space for the long term.
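The top-down pipeline Bill describes (fab output, then chips, then power, then bandwidth) could be sketched as below. Every constant is a placeholder assumption; the real inputs, like TSMC fab reservations and the chip mix, are proprietary to Zayo's model.

```python
# Sketch of the top-down demand model described above.
# Every number here is a placeholder assumption, not a real input.

accelerators_per_year = 5_000_000     # assumed annual fab output of AI accelerators
watts_per_accelerator = 700           # assumed board power per accelerator
bandwidth_gbps_per_mw = 2_000         # assumed network demand per MW of compute

# Chips -> power: how many megawatts of data-center load the chips imply.
total_mw = accelerators_per_year * watts_per_accelerator / 1e6

# Power -> bandwidth: how much network capacity that load would consume.
total_tbps = total_mw * bandwidth_gbps_per_mw / 1000

print(f"{total_mw:,.0f} MW of new compute -> ~{total_tbps:,.0f} Tbps of network demand")
```

The value of the approach is less the point estimate than the chain of conversions: each assumed ratio can be revised independently as better data arrives.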
- 5m21s Speaker 1
-
We did, just in January, work for Seagate, which builds hard disks. And hard disks, what a lot of people don't realize, are what the cloud runs on. SSDs are the consumer or edge product that goes into the computer at somebody's premises, maybe a business or a consumer, but it's hard disks that hold the data in data centers and drive the cloud, which is the other side of the coin that you alluded to. And the amount of storage requirement that we could identify is just staggering.
- 6m1s Speaker 2
-
Yeah, it's through the roof.
- 6m3s Speaker 1
-
Yeah. So you're building more miles.
- 6m5s Speaker 2
-
Yep.
- 6m6s Speaker 1
-
Tell us a little bit more about this.
- 6m8s Speaker 2
-
Yeah, so one, I mean, as we're nerding out on what's driving the demand, you mentioned the storage devices. Another angle we looked at is how much demand this is gonna drive. If you assume that neural processing takes a certain amount of watts per flop, then we think that over time, as chips in data centers get more and more efficient, the amount of bandwidth per watt is gonna increase dramatically. If you look at crypto or Bitcoin as an example, when they started on CPUs, they were really inefficient. But as they went to FPGAs and then to custom ASICs, there was a 10,000-fold increase in the amount of compute per watt.
- 6m51s Speaker 2
-
And we think that AI is gonna follow a pretty similar trend: given a certain amount of power that a data center has today, the amount of neural processing it'll be able to do for that unit of power is gonna go through the roof if it follows something similar to the crypto pattern. But a given amount of neural processing still requires a certain amount of bandwidth. So we think some of the estimates we have now could turn out to be at the low end as these chips and data centers get much more efficient.
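The crypto analogy can be put in numbers: at a fixed power envelope, an efficiency gain multiplies the compute, and with it the potential bandwidth demand, that the same site can generate. A toy illustration, where the power budget is an assumed figure and the 10,000x gain is the Bitcoin trajectory cited above:

```python
# Toy illustration: fixed data-center power, rising compute efficiency.
# The 50 MW budget is an assumption; 10,000x is the CPU -> FPGA -> ASIC
# gain cited for Bitcoin in the conversation.

power_mw = 50               # fixed data-center power budget (assumed)
compute_per_watt = 1.0      # normalized to today's chips
efficiency_gain = 10_000    # Bitcoin-style efficiency trajectory

compute_today = power_mw * 1e6 * compute_per_watt
compute_future = compute_today * efficiency_gain

# Same power draw, vastly more neural processing -> more traffic to move.
print(f"Same {power_mw} MW site: {compute_future / compute_today:,.0f}x the neural processing")
```

This is why current bandwidth estimates could prove low: the power grid constrains where data centers go, but not how much traffic each megawatt eventually generates.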
- 7m17s Speaker 1
-
Yeah, and Jensen Huang from Nvidia just said yesterday after their earnings that he sees the new models using 100 times more compute because of the difference in reasoning. Do you think that will also drive the bandwidth requirement?
- 7m36s Speaker 2
-
I think one of the big unknowns, and actually the Grok release was a good indication here, is how much and how effective the use of synthetic data is going to be. I think all indications are that synthetic data is gonna be a real thing, that it's gonna drive a ton of value, and that the scaling laws will continue to hold on the back of synthetic data. So, yeah, I think all signs are pointing in the right direction.
- 7m56s Speaker 1
-
And I think a lot of the AI that we do today is actually text based. There's not that much video yet, but pictures and even AI movies are coming. And that could be another quantum leap in data requirement and in throughput requirement.
- 8m15s Speaker 2
-
Yeah. And I think what that starts to do is drive bandwidth usage into parts of the network that we haven't really talked about. A lot of the discussion thus far has been about training and getting the models trained. I think when you talk about video, whether it's security cameras or even robots or any source of video, that's really gonna start putting pressure on the last-mile bandwidth. We're starting to see a lot of design requests for that, but we haven't seen the big demand increase yet. Over the next year or two, though, I think that's where we're really gonna see an order of magnitude change in demand: in the last mile for inferencing.
- 8m53s Speaker 1
-
So you're seeing dramatically more demand from a bandwidth perspective. Does that mean prices are coming down, or are you able to do what NVIDIA does, where the product gets more powerful and, even per flop, they're able to achieve higher rates?
- 9m13s Speaker 2
-
There's different ways to think about it. There's a price per meg and a price per fiber. Fiber is really the base infrastructure, and if you can get better and better at packing more megabits into each piece of fiber, your price per megabit goes down. But the price of fiber recently has been stable and actually going up. Because it's at the physical layer, the price of fibers has been going up, not down.
- 9m39s Speaker 2
-
But your price per bit has been coming down forever.
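The two price curves moving in opposite directions can be made concrete. All the dollar figures and capacities below are illustrative assumptions, not Zayo pricing:

```python
# Illustrative only: how price-per-fiber can rise while price-per-bit falls.
# None of these dollar figures or capacities are Zayo's actual numbers.

fiber_price_then, fiber_price_now = 100.0, 120.0          # $/strand-mile/mo (assumed)
gbps_per_fiber_then, gbps_per_fiber_now = 1_000, 25_600   # e.g. 10x100G vs 64x400G (assumed)

per_gbps_then = fiber_price_then / gbps_per_fiber_then
per_gbps_now = fiber_price_now / gbps_per_fiber_now

print(f"Fiber price: up {fiber_price_now / fiber_price_then - 1:.0%}")
print(f"Price per Gbps: down {1 - per_gbps_now / per_gbps_then:.0%}")
```

In this sketch the strand gets 20% more expensive while the effective cost per gigabit falls by roughly 95%, which is the win-win Roger describes next: the seller prices per strand, the buyer thinks per gigabit.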
- 9m46s Speaker 1
-
This is a win-win, because I would imagine that you price things on a per-strand basis, and the customer prices it on a per-gigabit basis. So you both win. You get more revenue, and for them it gets cheaper on a per-gigabit-per-second basis.
- 10m5s Speaker 2
-
That's right. And a lot of that's the same story that's been happening in the telco space for, you know, twenty, thirty years, but it's kind of on steroids now as the volumes are increasing so fast.
- 10m15s Speaker 1
-
So what's next? What's coming after this?
- 10m18s Speaker 2
-
Yeah. No. I think we hit a little bit of it. I think we're obviously seeing the big demand for training that's driving lots of long haul demand. We are seeing an increase in the need for connection to data centers that are proximate to the major metros.
- 10m32s Speaker 2
-
So think of connecting 30 to 50 megawatt data centers that are roughly 50 miles from downtown metropolitan areas. We're seeing that demand happening right now. And we're seeing a lot of last-mile planning for inference's impact on the last mile. But I think we're probably twelve to eighteen months away from that really blowing out.
- 10m51s Speaker 1
-
So the mythical edge compute is coming?
- 10m54s Speaker 2
-
Well, one person's edge is another person's core.
- 10m57s Speaker 1
-
Yeah.
- 10m58s Speaker 2
-
If you consider the edge a major metropolitan area, so think of like 30 to 40 metros in the US where each metro needs five to 10 data centers, then that's what I consider the edge, and yes. If you think of the edge as the base of a cell tower, I don't think that's ever gonna happen.
- 11m15s Speaker 1
-
It will be interesting. A lot of folks, when you listen to Nvidia with their, oh, is it AI-RAN, that's almost their vision: they're putting so much compute power there that they can put some storage on it too, and then they have a little data center.
- 11m32s Speaker 2
-
I've done some data center and bare metal businesses in the past, and it is a big challenge getting the right silicon in the right place at the right time. And I think as an operator, what you really wanna optimize for is having as few POPs as possible that meet the requirements of your use case, because there are such advantages in scale economics and operational simplicity. You know, fewer POPs is better to the extent that they can serve the use case. And there just have been no use cases that require that level of deep edge deployment, if you will, to warrant the additional cost and complexity.
- 12m9s Speaker 1
-
Well, really cool. Well, Bill, thank you for coming on the show and giving us fascinating insights into Zayo's business.
- 12m19s Speaker 2
-
Great. Thanks for having me.
- 12m20s Speaker 0
-
Alright. Thanks, Roger. We'll talk next week.
- 12m22s Speaker 1
-
Thank you.