If you missed our analyst call on Wednesday with Roger Entner, Peter Rysavy and Avi Greengart, you can listen in now! Topics discussed included 5G network deployment, the future of smartphones in a 5G world, cloud computing, use cases for artificial intelligence, and more!

What to Listen For:

“Even though the opportunity to connect to 5G today is limited – it’s amazing that we can connect to 5G at all. Because when we started working on the standards we weren’t expecting any deployment until 2020. So we’re actually a year ahead of schedule which is remarkable for the complexity.” – Peter Rysavy, Rysavy Research

On the benefit of advancements in AI and AR technology: “If you’re trying to wire up an airplane, having a heads up display where it can show you how to wire up the airplane in real time, with overlays of what you’re seeing and what you should be seeing…the return on investment is crazy high. It’s so high in fact, that in that particular use case, Boeing and Airbus are willing to develop these systems in-house, building their own custom software, in some cases building their own custom hardware.” – Avi Greengart, Techsponential

“Another topic that’s going to be really interesting is the whole convergence issue of telecommunications with content…70% of wireless usage is video, and so video becomes more and more important and some of the more obscure things that nobody paid attention to will become much more prevalent. For example, the STELAR re-authorization.” – Roger Entner, Recon Analytics

Have Questions? Head to Twitter and Chat With Us:

Host Roger Entner: @RogerEntner
Peter Rysavy: @peter_rysavy
Avi Greengart: @greengart

President Trump’s re-election campaign recently threw the wireless industry into turmoil again. Kayleigh McEnany, national press secretary for President Trump’s 2020 re-election campaign, told Politico that “A 5G wholesale market would drive down costs and provide access to millions of Americans who are currently underserved. This is in line with President Trump’s agenda to benefit all Americans, regardless of geography.”

This new message comes after months of private comments by Trump 2020 campaign manager Brad Parscale and adviser Newt Gingrich promoting a different plan for 5G, one that would have the federal government seize 5G spectrum from the Pentagon and hand it to a private company to manage and lease to other private companies. Enter Rivada, a company headed by Declan Ganley and backed by political heavyweight Karl Rove and venture capitalist Peter Thiel.

As the rumors about Rivada being “given” commandeered spectrum to manage a 5G network swirled, shares in Intelsat took a dive from $23.55 on Friday, March 1, 2019 to $17.76 on Tuesday, March 5, a 25% drop in just two trading days, shaving off $700 million in market cap. This reflects how concerned Wall Street is about the specter of a government-sponsored national 5G wholesale network.

Robert Spalding, a retired brigadier general and author of the now-infamous National Security Council memo proposing a government-controlled national 5G network, has not backed off his idea despite the swift and negative reactions it has spawned. In an interview discussing the potential Rivada 5G wholesale network, Spalding reiterated that he still sees merit in a nationwide wholesale network. This was followed by an opinion piece in The New York Times by Kevin Werbach, who advocated for a national wholesale network, arguing that it would magically reduce consumers’ mobile bills.

Oddly, Werbach advocates breaking the “wireless oligopoly” by replacing it with a monopoly. Mr. Werbach concedes that there have been notable failures of government-controlled communications networks, like Australia’s open-access fiber network, but suggests that in the U.S. it would be a success with “careful oversight and a long-term commitment by the government.” This is a dog whistle for economic regulation at best and nationalization at worst.

This entire debate is happening against the backdrop of a robust and methodical 5G build-out in the U.S. American mobile operators have already started building 5G networks, having engaged in the standards-setting process for years to make it a reality. Some operators are already up and running, and within a few months dozens of cities will have mmWave 5G networks. By the end of the year, wireless operators will also use some of their existing spectrum for 5G, which means we will have nationwide 5G networks, with ultra-high-speed coverage areas growing in more and more places. How can a state-sponsored 5G wholesale provider catch up to this reality when it is at least two years behind in building out the infrastructure, and its prospective customers all have their own networks?

When I take a step back and look at the entire hoopla, it looks like a solution in search of a problem. Rivada is looking for a business and its investors are looking for a payoff. In 2016, Rivada applied to become the wholesale broadband provider for Mexico’s Red Compartida, but was disqualified from bidding after failing to post a bid bond. In 2017, Rivada tried to become the nationwide FirstNet provider but lost out to a competing bid from an existing carrier. In both cases, Rivada sued the respective governments without success.

Each state in the United States had the choice to opt in to FirstNet or to opt out and pursue its own first responder network. Initially, New Hampshire chose Rivada to run its first responder network, but with hours left to make the decision, New Hampshire changed its mind and chose FirstNet. The push for a nationwide 5G wholesale network is Rivada’s third (or fourth) try at convincing a government to give it spectrum for free. You can’t blame Rivada for a lack of trying, but just because Rivada is trying to make a buck does not mean its idea makes business sense, or that it is something the U.S. government should adopt at the expense of depriving existing 5G efforts of more spectrum.

Instead, as it has in the past, the U.S. should invest in its winning strategy: clearing more spectrum for commercial deployment, expediting the siting of 5G infrastructure, and allocating spectrum based on sound analysis of which companies are best positioned to use this critical input as efficiently and quickly as possible.

“More spectrum!” is the rallying cry among participants in the global race to be the first country to deploy 5G networks. Generally speaking, deploying more spectrum in a network means faster speeds and more capacity to handle consumer demand. But not all spectrum is created equal.

Low-frequency spectrum, below 1 GHz, typically travels farther and penetrates walls and buildings more easily than higher-frequency spectrum. In the mid-band, between 1 GHz and 2.1 GHz, radio signals travel about two-thirds as far as signals below 1 GHz and have a harder time penetrating structures. While people can use their wireless devices in the home, reception in interior rooms or the basement can be more difficult. High-frequency spectrum, between 2.1 and 6 GHz, has a hard time penetrating walls. And spectrum above 6 GHz, called mmWave (millimeter wave) spectrum, does not travel far and has an even harder time penetrating structures.
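
For a concrete view of this taxonomy, here is a toy Python sketch that encodes the band categories as this article frames them; the boundaries follow the article’s framing rather than a formal standard:

```python
# Toy sketch of the propagation taxonomy described above. The band
# boundaries follow this article's framing, not a formal standard.

def classify_band(freq_ghz: float) -> str:
    """Map a carrier frequency in GHz to the article's band categories."""
    if freq_ghz < 1.0:
        return "low-band: longest reach, best building penetration"
    if freq_ghz <= 2.1:
        return "mid-band: ~2/3 the reach of low-band, weaker penetration"
    if freq_ghz <= 6.0:
        return "high-band: hard time penetrating walls"
    return "mmWave (>6 GHz per this article): short range, poorest penetration"

for f in (0.7, 1.9, 3.5, 28.0):
    print(f"{f:>5.1f} GHz -> {classify_band(f)}")
```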

Licenses to use spectrum are typically awarded by the Federal Communications Commission based on defined geographic areas and for specific amounts of spectrum. Engineers typically refer to the specific spectrum licenses as “channels” or “carriers”.

Regulators can opt to allocate spectrum for large or small geographic areas, e.g., nationwide versus an economic area, and in wider or narrower channels, e.g., a 5 MHz channel versus a 20 MHz channel. Licenses covering smaller geographic areas and smaller amounts of spectrum are attractive to smaller providers who are willing to sacrifice peak data speeds for coverage. The question is: what do regulators want? More bidders in a spectrum auction that provide slower speeds, or fewer bidders that can then offer faster speeds?

Spectrum in the United States is highly fragmented and therefore does not yield the technically optimal speeds that could be achieved across the spectrum bands described above. The FCC has awarded twenty-two 5×5 MHz, three 6×6 MHz, four 10×10 MHz, one 11×11 MHz, three 15×15 MHz, and one 20×20 MHz licenses. The optimal channel size for 4G, a technology we started to implement in 2011, is 20×20 MHz. Prior FCCs have unfortunately allocated spectrum in ways that make it impossible to provide the fastest possible speeds in the US. There is one exception to this trend: the FCC’s decision to allocate MSS satellite spectrum in a 20×20 MHz channel.
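
A minimal sketch of the fragmentation arithmetic, using the license counts above (a “5×5” license is a paired 5 MHz uplink plus 5 MHz downlink):

```python
# License counts from the paragraph above; "width" is MHz per direction.
licenses = {5: 22, 6: 3, 10: 4, 11: 1, 15: 3, 20: 1}

total_paired_mhz = sum(2 * width * count for width, count in licenses.items())
at_4g_optimum = licenses.get(20, 0)  # licenses already at the 20x20 MHz optimum

print(f"Total licensed spectrum: {total_paired_mhz} MHz (paired)")
print(f"Licenses at the optimal 20x20 MHz size: {at_4g_optimum} of {sum(licenses.values())}")
```

Only one of the thirty-four licenses reaches the 20×20 MHz optimum, which is the fragmentation problem in a nutshell.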

Due to its high-band holdings, Sprint is probably best positioned to leverage high-band spectrum for maximum speed. The 2.5 GHz band has 194 MHz that is dynamically shared between upload and download and is owned by one operator: Sprint. Where it is available, Sprint combines three 20 MHz channels for ultra-fast downloads; the big caveat is “where it is available.” Sprint is still lagging behind in the availability of high-speed 4G internet, and the limited range of 2.5 GHz hampers its availability in rural America. Hopefully the FCC will allocate the mmWave spectrum (24 GHz and higher at this time, even though technically mmWave starts at 30 GHz), where a multiple of the spectrum licensed for wireless today is available, in 100 MHz or larger channels. Remember: the wider the license/channel, the faster the speed.

European regulators are increasingly allocating their spectrum in larger channels, and in some countries the auction process aggregates the licenses won into larger channels to maximize their benefit. One example is Switzerland, which uses a process called a Combinatorial Clock Auction (CCA) to achieve maximum channel sizes. As a result, Switzerland is regularly among the five fastest countries for mobile internet. The FCC should consider a CCA in its upcoming high-band and mmWave band auctions to maximize the opportunities for carriers to obtain wide channels. This would also help the agency prevent individual bidders from buying discrete, small channels solely to prevent competitors from creating wide, contiguous channels. DISH Network and its bidding partners Northstar Wireless and SNR Wireless did exactly this when they strategically purchased licenses to create fragmented channels during Auction 97 in 2015. This increases the value of the blocking licenses if DISH and its bidding partners want to sell them again.
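
For intuition on how the clock mechanism works, here is a highly simplified Python sketch of a CCA’s clock phase: the price of generic spectrum blocks rises round by round until aggregate demand no longer exceeds supply. Real CCAs add package bids, a supplementary round, and second-price rules, and the bidders and valuations below are purely illustrative assumptions:

```python
# Simplified clock phase of a Combinatorial Clock Auction (CCA).
# Bidders demand generic 20 MHz blocks; demand is all-or-nothing here
# for simplicity, whereas real bidders reduce demand gradually.

SUPPLY = 10        # generic 20 MHz blocks on offer
PRICE_STEP = 1.0   # price increment per clock round (arbitrary units)

# Illustrative bidders: (valuation per block, max blocks wanted)
bidders = {"A": (12.0, 6), "B": (9.0, 5), "C": (7.0, 4)}

def blocks_demanded(valuation: float, price: float, max_blocks: int) -> int:
    return max_blocks if price <= valuation else 0

price = PRICE_STEP
while True:
    demand = {b: blocks_demanded(v, price, m) for b, (v, m) in bidders.items()}
    if sum(demand.values()) <= SUPPLY:
        break  # clock stops: no more excess demand
    price += PRICE_STEP

print(f"Clock stops at price {price:.1f} with demand {demand} (supply {SUPPLY})")
```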

New smartphones have the capability to use several channels at the same time, creating faster downloads by essentially gluing different pieces of spectrum together. In late 2016, the first smartphones with 3x carrier aggregation, the ability to use three licenses/carriers simultaneously, came to market. For example, the iPhone 7 and the Samsung Galaxy S7/S8 could use 3x carrier aggregation, and the iPhone 8 could use 4x carrier aggregation. The new Samsung Galaxy S9 can use up to 7x carrier aggregation.
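
The speed payoff scales roughly linearly with aggregated bandwidth. As a back-of-the-envelope sketch, assuming roughly 7.5 bits/s/Hz of peak efficiency (a 150 Mbps-class LTE device in a 20 MHz channel; the actual figure varies with device category and network configuration):

```python
# Rough peak-throughput arithmetic for carrier aggregation.
PEAK_EFFICIENCY = 7.5  # bits/s/Hz; illustrative assumption, varies in practice

def peak_mbps(channels_mhz: list) -> float:
    """Approximate peak downlink (Mbps) across aggregated channels."""
    return sum(PEAK_EFFICIENCY * mhz for mhz in channels_mhz)

print(peak_mbps([20]))          # one 20 MHz channel:     150.0 Mbps
print(peak_mbps([20, 20, 20]))  # 3x carrier aggregation: 450.0 Mbps
print(peak_mbps([5, 5, 10]))    # three fragments = one 20 MHz channel: 150.0 Mbps
```

Note the last line: aggregating three fragments only gets a device back to what a single clean 20 MHz channel delivers, which is why aggregation is a band-aid rather than a substitute for wide channels.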

The disadvantage of carrier aggregation is that it drains the device’s battery faster: multiple radios (transmitters/receivers) are active at the same time, and the phone has to periodically check whether the different spectrum bands are actually available. While having the ability to aggregate more channels is welcome, the less often this band-aid technology is needed, the better. The better method is to create larger channels in the first place; if those larger channels are then aggregated, the resulting speed is even higher, with less impact on the phone’s battery life.

In a nutshell, the FCC needs to put together a framework that facilitates the creation of the largest possible channels, which creates the fastest possible mobile internet and, incidentally, also has the least impact on smartphone battery life. Having to aggregate several small channels puts the US at a significant comparative disadvantage vis-à-vis other countries that allocate spectrum in much wider channels. The FCC should either allocate spectrum in the high and mmWave bands in larger blocks across larger geographic areas, or employ a Combinatorial Clock Auction that combines the benefit of potentially more license winners with the advantage of the largest possible channel sizes for superior speed. Only then will the US have a chance of keeping pace with other countries, and maybe even having the world’s fastest mobile internet.

The IAB just published its 2016 advertising revenue figures. It was a banner year, with record-setting revenues of $72.5 billion, 22% higher than the year before. This makes digital advertising the number one advertising medium in the United States, as TV advertising, according to eMarketer, came in at $71.3 billion. Seventy-five years after the first TV commercial and 25 years after television became the largest advertising medium, a new king of advertising has been crowned. The significance of this event cannot be overstated. Advertisers demanding efficiency and effectiveness measurements have voted with their wallets to make digital advertising the biggest spend category. But even within digital advertising, we are seeing major shifts away from “traditional” desktop and fixed digital advertising toward mobile advertising. Growing from basically nothing ten years ago, mobile advertising is now a $36.6 billion segment and represents 51% of digital advertising.

| Ad Format (in $ millions) | 2015 Revenue | 2015 Share | 2016 Revenue | 2016 Share | Growth | Growth Percentage |
|---|---|---|---|---|---|---|
| Search | $20,481 | 34.4% | $17,756 | 24.5% | ($2,725) | (13.3%) |
| Classified & Directory | $2,757 | 4.6% | $2,345 | 3.2% | ($412) | (14.9%) |
| Lead Generation | $1,756 | 2.9% | $1,989 | 2.7% | $233 | 13.2% |
| Mobile | $20,677 | 34.7% | $36,641 | 50.5% | $15,964 | 77.2% |
| Display | $13,881 | 23.3% | $13,790 | 19.0% | ($91) | (0.6%) |
| Total | $59,552 | | $72,521 | | $12,969 | 21.7% |

Source: IAB 2017
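
As a sanity check, a short Python sketch recomputes the 2016 shares and growth rates from the revenue figures in the table above (in $ millions):

```python
revenue = {  # format: (2015, 2016), $ millions, from the IAB table above
    "Search": (20481, 17756),
    "Classified & Directory": (2757, 2345),
    "Lead Generation": (1756, 1989),
    "Mobile": (20677, 36641),
    "Display": (13881, 13790),
}

total_2016 = sum(y16 for _, y16 in revenue.values())
for fmt, (y15, y16) in revenue.items():
    share = 100 * y16 / total_2016
    growth = 100 * (y16 - y15) / y15
    print(f"{fmt:<24}2016 share {share:5.1f}%   growth {growth:+6.1f}%")
```
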
What is hidden beneath the numbers is that mobile advertising captured more than the entire growth of digital advertising. Advertising is clearly going where Americans are spending more and more of their time and where most of the data traffic is being consumed. Advertising is also moving into the segments dominated by Google and Facebook, which dominate mobile advertising to a much greater extent than they do the fixed internet.
As we can see below, the combined share of Google and Facebook increased from 67.4% in 2015 to 71.2% in 2016 as the mobile strategies of both companies paid off. Google and Facebook, the two largest players, represented a combined 89% of the entire growth of the digital advertising market. The Herfindahl-Hirschman Index (HHI), a commonly used metric of market concentration, is well north of 3,000; markets with an HHI of 2,500 or higher are considered highly concentrated. If the current trend continues, by the end of 2017 Facebook will have a higher share of digital advertising than all remaining competitors, excluding Google, combined.
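
A minimal sketch of the HHI arithmetic, using the 2016 shares from the table below. The HHI sums squared market shares in percentage points; since “everyone else” is many small firms, Google and Facebook alone already push the index past 3,000:

```python
shares = {"Google": 51.8, "Facebook": 19.4}  # 2016 shares, percentage points

hhi = sum(s ** 2 for s in shares.values())
print(f"HHI from Google and Facebook alone: {hhi:,.0f}")  # ~3,060; >2,500 = highly concentrated
```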

| Ad Revenues (in $ millions) | 2015 | 2015 Share | 2016 | 2016 Share | Growth | Share of Growth |
|---|---|---|---|---|---|---|
| Google | $31,300 | 52.5% | $37,600 | 51.8% | $6,300 | 49% |
| Facebook | $8,900 | 14.9% | $14,100 | 19.4% | $5,100 | 40% |
| Everyone Else | $19,400 | 32.5% | $20,800 | 28.6% | $1,400 | 11% |
| Total | $59,600 | | $72,500 | | $12,900 | |

Source: Digital Content Next
Just as a comparison, another big internet company, Netflix, had $5.1 billion in US sales in 2016 – the same amount by which Facebook grew in just one year, from 2015 to 2016. This demonstrates how significant advertising-driven digital businesses are in the TMT sector. Google and Facebook capture 89% of all growth in digital advertising, which impressively counters the Google narrative that “competition is only a click away.” While this might be true in theory, in reality, when a company grows as fast as all of its competitors combined, the narrative sounds hollow and self-serving. This is especially true when the only competitor that is halfway keeping pace with Google – Facebook – does not rely on search engine recommendations for its traffic.
Even more concerning is the declining diversity of advertising revenue sources. In 2007, Google generated just over 60% of its advertising revenue from its own sites – a reasonably healthy advertising ecosystem, considering that most of its advertising revenue came from search. Ten years later, Google derives more than 80% of its advertising revenues from its own websites and services.

| Global Revenue (in $ millions) | 2014 | 2015 | 2016 |
|---|---|---|---|
| Google Properties | $45,085 | $52,357 | $63,785 |
| Google Network Members | $14,539 | $15,033 | $15,598 |
| Total | $59,624 | $67,390 | $79,383 |
| Google Properties Percentage | 75.6% | 77.7% | 80.4% |

Source: Alphabet
The increasing percentage of Google’s revenue derived from its own properties, combined with very lackluster growth for non-Google properties, raises questions of potential search engine bias or a precipitous decline in the ability of Google Network members to monetize their sites. Either way, it is not a healthy trend.
As they have become increasingly successful and dominant, Google and Facebook have been able to increase their revenue per user. Google has increased ARPU by about a third, whereas Facebook has been extremely successful, more than doubling ARPU in the US and Canada within 24 months.

| US/Canada ARPU | 2014 | 2015 | 2016 |
|---|---|---|---|
| Google | $9.59 | $10.68 | $12.76 |
| Facebook | $9.00 | $13.70 | $19.28 |

Source: Facebook, Alphabet
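
The growth multiples implied by the table above, as a quick sketch:

```python
arpu = {"Google": (9.59, 12.76), "Facebook": (9.00, 19.28)}  # US/Canada, (2014, 2016)

for company, (y2014, y2016) in arpu.items():
    print(f"{company}: {y2016 / y2014:.2f}x ARPU growth from 2014 to 2016")
# Google ~1.33x ("about a third"), Facebook ~2.14x ("more than doubled")
```
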
Facebook especially, which now considers all of its customers to be wireless users, has increased its ARPU into the range of a traditional prepaid wireless subscriber. We should consider these significant monetization advances by Facebook and Google in light of their overlap with the wireless business. Verizon is planning to emulate this with its new Oath business unit – the combined AOL and Yahoo properties. Verizon will have the same reach as Google or Facebook, with a programmatic advertising engine. The fundamentals are in place for Verizon to become a serious competitor to Google and Facebook. It all comes down to execution now.

The FCC is seeking to more closely regulate a key tactic in mobile carrier marketing—their performance and speed claims.

The commission already does this for fixed broadband and has proposed to use crowd data to set the upper limit for carrier marketing claims.

But here’s the problem: There are significant differences between crowd and scientific testing.

Crowd testing is easy to conduct, but it is hard to draw useful conclusions from the results. Scientific testing takes significant resources to conduct, but it provides easy-to-understand, useful results based on a methodical process that is accurate and enables apples-to-apples comparisons. As a result, the FCC, in taking a shortcut with crowd testing, will not present a full or fair picture of the performance and speed of mobile providers.

Although the differences between crowd and scientific testing could be chalked up merely to competition, with both sides advocating their approach, a major government agency has decided to throw in its lot with a crowd tester. Such an approach will provide a limited view of the mobile consumer experience and won’t provide an accurate reading of the service providers’ strengths and weaknesses.

In this report, we provide an overview of both scientific and crowd testing and provide a number of observations on the right policy direction.

Download the Report Now

AT&T is launching an aggressive new offer providing unlimited voice, text and data to customers who have both AT&T wireless and DirecTV service. The service is competitively priced at $100 for the first line, $40 each for the second and third lines, and free for the fourth line. It is immediately available to the 4.5 million AT&T wireless customers who also have DirecTV, as well as to new customers signing up for a combined offer. The most logical target segments are the 21 million AT&T wireless customers who do not have DirecTV and the 15 million DirecTV customers who do not have AT&T wireless service, all of whom are offered significant incentives to bundle. The unlimited data offer allows consumers to watch their DirecTV service, or any other video service they have access to, on the go as if they were at home, in the video quality of their choice. The larger the screen they watch mobile video on, the more noticeable the visual difference.

AT&T is able to make such an offer because it has set itself up far more broadly than its competitors: nobody else has wireless, TV, fixed internet in 21 states, as well as home automation. Its competitors are increasingly single- or dual-point solutions that cannot match AT&T’s full product portfolio. For AT&T, the current state is the result of a deliberate strategic choice by Randall Stephenson and his senior team to provide customers a comprehensive, integrated solution from one provider, rather than asking them to assemble a solution from individual services from different providers that the consumer has to integrate themselves. As an operator, AT&T is following the same basic strategy as Apple and Microsoft, which believe that an integrated user experience designed with a single vision creates a better user experience and has a better chance of producing product and service combinations that are more useful and satisfying than one-off, stand-alone products and services. The difference is that AT&T approaches it from the network and connectivity starting point, whereas Apple and Microsoft begin from the OS and the device. The clearer the vision, the further the reach and the better the integration, the greater the impact on customers’ lives. As the strategy takes shape in the form of integrated offers, the results depend on execution: satisfying the latent demand of customers who have an increasingly mobile lifestyle.

It will be interesting to see how competitors respond to this offer. Management decisions and circumstances have created differentiated competitors in uneven product match-ups, with limited ways to respond to such an offer.
When AT&T closed the DirecTV deal, Sprint offered DirecTV customers a year of free Sprint service. Who knows what Sprint will offer this time? One year? Two years?
The cable companies have a similar content lineup, but they lack the partner to provide a similar offer, as they partnered with Verizon Wireless with an option to launch an MVNO. While Comcast has activated its MVNO option three years after closing its deal with Verizon Wireless, it might take a while longer for Comcast to launch the service. T-Mobile, which based on its Binge On offer would be a more fitting partner for cable, would be ecstatic if the cable providers came hat in hand looking for a closer partnership. Can you imagine the tweet storm coming from John Legere?
Verizon could probably match AT&T within its FiOS footprint in the Northeast, but that in itself would make Verizon’s situation even more apparent than it already is – an increasingly pure-play wireless service provider with a limited TV footprint. Its answer to the increasingly converging telecom and media world, Go90, is considerably narrower in content and focused on just one demographic segment compared to AT&T’s video offer, which also limits the upside.
The closest in terms of unlimited is T-Mobile with its Binge On offer. It is a “bring your own video” offer that keeps video quality at DVD levels, so its strengths and weaknesses are those of the associated video service and screen. While understandable from an economic and network-loading perspective, Binge On is a tradeoff: the larger the screen and the greater the resolution of the video stream, the more visible the quality restriction becomes. Ultimately, consumers should be able to choose what is right for them.

AT&T is able to offer an unlimited data plan again because it is bringing more spectrum online, most notably in the WCS band; expects more spectrum to become available through the auction process; is re-farming 2G spectrum, which is significantly less spectrally efficient than 4G LTE; and has redesigned its data network to be a video-first network. The focus on DirecTV/AT&T wireless customers, along with this being a limited-time offer, will lessen the impact on the network. If the load on the network becomes too large, AT&T can simply conclude the promotion.

Earlier this week, the International Telecommunication Union (ITU) reported the results of its annual global survey. Mobile broadband connections (47.2% of the global population) have now overtaken households with internet access (46.2%), not to mention fixed broadband connections (10.2%). In terms of the ITU’s ICT Development Index, the US remains among the 15 most developed countries. While some alarmists consider that a poor showing, they do not take into account differences in size, both population and geography: ahead of the US are countries such as Hong Kong (31 square miles, part of China), Iceland (200,000 inhabitants) and Luxembourg (543,000 inhabitants, smaller than Rhode Island). The US improved in the rankings again this year. Can the US do better with an investment-friendly policy framework? Yes, it can.

Later in the week, the Centers for Disease Control and Prevention (CDC) released an update to its recurring wireless substitution report documenting the shift away from landlines to mobiles. Now you may ask why the CDC does a wireless study. The answer is easy: as part of its National Health Interview Survey, it added a question asking whether respondents have wireless phones, landline phones or both, and combined with all the other demographic information it collects, the survey has become the definitive source for tracking cord-cutting trends.

For the first six months of 2015, 47.4% of respondents said they had a wireless phone but no landline anymore, i.e., people we consider cord cutters. For families with children, this number has risen to 55.3%. Initially, cord cutting was accelerated by the 2005 introduction of a wireless option to the Lifeline program, but in recent years the general population has caught up. While in 2012 only 30.7% of non-poor Americans had cut the cord, this increased to 45.7% in 2015, whereas for poor Americans cord cutting only increased from 51.8% to 59.3%. Employed Americans increased their cord cutting from 38.4% to 52.7%, whereas cord cutting among unemployed Americans is a lot lower, increasing from 23.6% to 32.7% over the same time frame.

The biggest differences in the adoption of cord cutting depend on where Americans live and their age. Americans in the Northeast are substantially less likely to have cut the cord (31.6%) than Americans in other parts of the country (47.1% to 51.9%). This does not mean that Northeasterners are immune to cord cutting, as their number of cord cutters increased by 50% in the last three years; it is more due to their historically slower start. Not surprisingly, Americans over 65 have the lowest adoption of cord cutting, but they nevertheless almost doubled their cord cutting over the last three years, from 10.5% to 19.3%. This is still substantially less than the next-lowest age segment, 45 to 65, which increased from 25.8% to 40.8% over the last three years.

As wireless substitution becomes the norm, in-building wireless usage becomes critical, as the mobile phone is increasingly the only communications device Americans rely on. The onus lies on local planning boards to do their part to enable wireless network infrastructure to be built quickly. The FCC has introduced 60-day and 150-day shot clocks for collocated or small sites and new cell sites, respectively. Local municipalities must now react more quickly to requests for permission to build new cell locations so that Americans can reliably use their mobile phones while at home, just as they do while en route, whether it’s to check sports scores on a Sunday afternoon, to call or text their friends, or to call 911 in an emergency.

Five years after the FCC called for data on the state of the special access marketplace from just a portion of the providers offering special access, the agency appears poised to modify contracts and embark on a new round of rate regulation based on market data from 2010 to 2012. This would not be a concern if in fact the market for special access services had stagnated in 2012 with prices and providers remaining constant; however, that is not the case. Why should we care if the FCC premises a new set of pricing regulations on outdated information? Let me explain.

Special access refers to the dedicated data connections that physically connect a business, an office park, a government building, a cell site, or other man-made structure to the larger public switched telephone network and the Internet. The number and kinds of companies offering special access services have increased substantially in the last few years. When Sprint issued RFPs for backhaul related to its Network Vision program, more than 20 providers responded. The pricing was so competitive – so low – that even at the most hard-to-reach sites, where competitive pressure should be smallest, at least one potential bidder decided not to submit a bid because it could not provide the service profitably.

There is much debate about the reality of the special access market, and market data is hard to get, but not impossible. Zayo is the largest stand-alone backhaul and special access provider in the country (it’s one of T-Mobile’s backhaul providers), and it publishes very timely pricing trend data. In its Q4 2015 Pricing Trends document, released with its earnings, we can follow the prices Zayo charges in the marketplace. Traditional DS1 pricing has declined from $1,147 per DS1 (also known as a T1) to $783. DS3 (also known as T3) pricing has declined over the same period from $4,081 to $3,269. To put this in context, a DS3 has 28 times the throughput of a DS1, for roughly four times the cost.
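
The underlying arithmetic, as a quick sketch; the DS1/DS3 line rates are the standard 1.544 and 44.736 Mbps, and the prices are the Zayo figures above:

```python
ds1_mbps, ds3_mbps = 1.544, 44.736   # standard T1/T3 line rates
ds1_price, ds3_price = 783, 3269     # current $/month per the Zayo data above

print(f"Throughput ratio: {ds3_mbps / ds1_mbps:.1f}x")    # ~29x
print(f"Price ratio:      {ds3_price / ds1_price:.1f}x")  # ~4.2x
print(f"$/Mbps: DS1 {ds1_price / ds1_mbps:.0f}, DS3 {ds3_price / ds3_mbps:.0f}")
```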

As part of the wireless industry’s drive to stay ahead of consumers’ appetite for high-capacity data services, building more backhaul has been essential. As Sprint has embarked on its Network Vision program, it has also revamped its backhaul provisioning. Gone are the days when Sprint was predominantly reliant on its direct competitors for backhaul; it now has a stable of thirty to forty alternative access providers, in addition to its own wireless backhaul in the 2.5 GHz range. T-Mobile has been no slouch either; almost all of its backhaul is now Ethernet over fiber, which is part of the reason its download speeds are so fast. The cost savings for both companies are substantial. In its Q4 2014 financial results, T-Mobile USA reported quarter-over-quarter cost of service down 7.1%, partially due to renegotiated backhaul contracts.

In addition to having a multiplicity of providers from which to obtain their special access lines, wireless companies continue to experiment with other solutions that can improve network performance and reduce cost. For example, companies are experimenting with self-backhaul where access and backhaul are part of the same system. This solution is intriguing but not currently viable in today’s spectrum environment due to the amount of spectrum needed to make it a reality. In the spectrum-constrained environment that characterizes the mobile market in the U.S., T-Mobile, AT&T and Verizon Wireless are all in the same boat. Because wireless self-backhaul could provide more options for carriers, this should be an additional reason for the government to redouble its spectrum clearing goals and aim for at least one gigahertz of spectrum for the wireless industry below 6 GHz.

It’s important to note that DS1 and DS3 lines are legacy products at the end of their technological life. As the industry has moved on to Ethernet connections, the number of DS3s sold by Zayo has declined from 3,569 in September 2013 to 2,772 in June 2015, a 23% drop in less than two years. For DS1s, the decline over the same period has been even more pronounced, at 38%. It is perfectly understandable that the lack of new demand for DS1s and DS3s makes some providers hesitant to issue new long-term contracts, as doing so would obligate them for many years to divert time and money to support a dying market. If we take Zayo’s data and project out the current decline rate, Zayo will have stopped selling DS1s in three and a half years and DS3s in less than seven years. But these projections are deceiving, and likely too conservative, as declines are accelerating while DS1/DS3 technology becomes increasingly obsolete.
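
The straight-line projection is simple arithmetic; a minimal sketch using the DS3 figures cited above (the same math applies to DS1s with their own decline rate):

```python
# Linear die-off projection from Zayo's DS3 counts above.
start_units, end_units = 3569, 2772   # Sep 2013 -> Jun 2015
months_elapsed = 21

decline_per_month = (start_units - end_units) / months_elapsed
years_to_zero = end_units / decline_per_month / 12

print(f"Decline rate: {decline_per_month:.1f} units/month")
print(f"Straight-line projection reaches zero in {years_to_zero:.1f} years")  # ~6.1
```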

Zayo’s data shows a massive shift to Ethernet connections, which are both faster and cheaper than DS1/DS3, and where the marketplace is essentially even, as new entrants and incumbents are building capacity at the same time. Zayo’s data shows a steady increase in demand, as well as falling prices per unit. The way Zayo groups the data – 10 to 100 Mbps, 101 to 1,000 Mbps, and above 1 Gbps – shows that customers are buying larger and larger pipes at lower prices per unit, which is consistent with commonly observed conditions in the marketplace. The market is so competitive that further consolidation among backhaul providers seems inevitable, because so many competitors are barely, if at all, breaking even today.

Into this competitive dynamic steps the FCC, which seems to want to set prices. But nobody, including the FCC, knows how low special access prices can continue to fall, which means the agency runs a real risk of setting rates at an artificially high level. In international markets where regulators set termination rates, everyone charges the government-set rate regardless of actual cost, especially when the cost is lower than the mandated rate. In the United States, where operators can freely negotiate call termination rates, per-minute prices are $0.0007. In countries where the regulator sets the rate, the rate is higher than in the U.S. and Canada, as a 2012 OECD report shows. In the lowest-cost government-controlled market, France, the termination rate of $0.0139 was almost 20 times higher than in the U.S., going up to $0.0878 in markets like Estonia, where the government-set rate is 125 times the rate U.S. operators negotiated with each other. Canadian operators, which can also set prices freely, went even further than their U.S. peers and eliminated termination rates entirely.
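
The multiples cited are straightforward to verify; a quick sketch using the per-minute rates above:

```python
us_rate = 0.0007                                    # freely negotiated U.S. rate, $/minute
regulated = {"France": 0.0139, "Estonia": 0.0878}   # government-set rates, 2012 OECD data

for country, rate in regulated.items():
    print(f"{country}: {rate / us_rate:.0f}x the U.S. rate")  # ~20x and ~125x
```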

If the FCC were to set a price level, even one meant as a price ceiling, the market would take it as a benchmark for what to charge for special access. The shakeout would continue, with weaker competitors selling to lower-cost providers, but at a lower competitive intensity and higher prices than if the FCC had not intervened. The winners would be the special access providers able to offer service above the cost of the most competitive players but below the government-set rate. It’s a classic rent-seeking scenario, where marginal players ask for government intervention to safeguard their survival and increase their profits at the expense of end customers. The losers would be the end customers: businesses that have to pay the government-mandated rate when competition would have driven prices below what the FCC deemed appropriate.

The impact of FCC intervention would be analogous to rent control in the housing market, which Nobel Prize-winning economist Gunnar Myrdal called “the worst example of poor planning by governments lacking courage and vision.”

They say a picture is worth a thousand words. If that’s the case, the Twitter header image of Microsoft CEO Satya Nadella demonstrates it perfectly. Just look at Nadella’s tortured smile, then try to make sense of the picture in the header. It resembles some kind of hellish, hopelessly complex landscape that maybe someone at Microsoft understands and loves. But for a company that wants to solve problems, it’s the wrong way to start. Nonetheless, it provides the perfect illustration of what is and isn’t happening at Microsoft.

[Image: Satya Nadella’s Twitter header, July 2015]

The decimation of its handset business is just the latest symptom of a fundamental lack of clarity about Microsoft’s role in the world. For a company that says it’s “mobile first,” the 7,800 layoffs are a striking admission of the utter failure of its mobile strategy. This move essentially shuts the door on all but a few remaining Microsoft employees in Finland.

It is a classical “the emperor has no clothes” moment.

Nokia’s fate was sealed when mobile devices started doing more than “connect people.” With its life on the line, the organization could not find a new reason to exist in a significantly changed – and still changing – world. Microsoft’s fate will be similarly sealed if it cannot provide a clear vision, and an elegant implementation of that vision, of how consumers will use technology – from “mobile first” devices to nomadic laptops and stationary desktops.

As the consumer reemerges as the focal point of technological innovation, Microsoft seems to be hopelessly stuck in an antiquated, we-do-it-this-way-so-like-it-or-lump-it, corporate-centric approach.

During briefings, when I asked Microsoft how it plans to differentiate its products and services from Apple’s and those of the Android ecosphere, company representatives consistently and unflinchingly replied, “We make consumers more productive.” I was taken aback. “No, seriously, how will you differentiate?” I followed up. The reply? “Seriously, we’ll lead with making the consumer more productive.” I remain flabbergasted by the wide disconnect between how consumers think, what they want, and what Microsoft plans to force on them – especially from a company that surveys the living daylights out of consumers. How many consumers have ever woken up in the morning and declared they want to be 2.3% more productive today through the use of Microsoft products and services? You might sell a CIO on that, but definitely not a consumer. The lack of vision and understanding of what “mobile first” actually means, beyond a tag line that an 8-year-old could recite at a school play, will turn off CIOs even more.

In hindsight, the handset group never had a chance as a full-portfolio device manufacturer. The lack of a clear and concise vision at Nokia was replaced with an empty shell around the term “mobile first.” The mobile devices the handset group produced have been very good – competitive with, or even superior to, devices that have significantly outsold them.

The lack of success for Microsoft in mobile is not because the division didn’t know how to make excellent devices. Rather, it comes from the lack of a compelling, holistic value proposition. Why would someone buy into the Microsoft ecosphere when they have so many choices? The company’s lack of a value proposition is glaringly apparent in the most competitive and newest segment (which, of course, cares the least about incumbency power): Mobile. The retrenchment into a core device team that creates fewer phones, but is hampered by a lack of corporate focus, will merely reduce the mobile price tag of a poorly defined overall corporate strategy.

Microsoft needs to realize that this “mobile first” world requires that its pace of innovation and attention to detail accelerate to mobile speeds company-wide.

That means it must produce new releases annually. Poor product releases like Windows 8—the equivalent of panicky software jambalaya, packed with reactionary knee-jerk features and devoid of attention to detail—cause a staggering amount of damage.

It would be okay for Microsoft to have a conceptual and executional meltdown once a decade. But Microsoft manages to do this with every other release of Microsoft Windows. Windows 10 looks like a good release. But let’s have a look back: Windows 8 was just plain bad, Windows 7 was good, Vista was abominable, Windows XP was good, and consumers responded to Windows ME with a resounding “not me!”

History repeats itself if you look further back. To add insult to injury, the average upgrade cycle is 30 months, which means that unless you are forced to use a bad OS because it’s the only one that comes with your new computer, or you just can’t take it anymore and switch, you have to wait five years for an innovative step forward. Why not save a lot of money and aggravation, skip every other release, and pour the resources into the successful update? No wonder Apple has taken so much market share from Microsoft with this two-steps-forward, one-step-back release cycle. The only saving grace for Microsoft is its huge embedded base and the lack of a serious competitor in the business market.

But if the only reason people purchase your product is that they have always purchased it and there is no viable alternative, one shouldn’t be surprised when a competitor emerges. An initial stream of businesses is already migrating toward Apple, and if Apple shows some love and care, the stream will turn into a raging torrent. As Apple integrates the user experience across hardware platforms, its success in mobile is expanding its beachhead in laptops and desktops, where it is continuously increasing market share despite offering only computers priced at $899 and above.

The most pertinent lesson is close to home: It took only a few years for Nokia to go from 50% global market share to 2%. Now Microsoft’s handset group faces the unenviable task of explaining to other handset manufacturers why they should build more Windows Phone devices, even as Microsoft pulls back in spectacular fashion.

Microsoft will fail if it continues to be a confused conglomeration of business units with terrible track records of getting their products to work together. It would be a good start if all of its products and services came together and worked seamlessly across hardware platforms. That alone would make the lives of its customers easier.