This perhaps explains many of the ads showing how great it is to work there.
Source: Vice
Secret Amazon Reports Expose the Company’s Surveillance of Labor and Environmental Groups
Dozens of leaked documents from Amazon’s Global Security Operations Center reveal the company’s reliance on Pinkerton operatives to spy on warehouse workers and the extensive monitoring of labor unions, environmental activists, and other social movements.
A trove of more than two dozen internal Amazon reports reveal in stark detail the company’s obsessive monitoring of organized labor and social and environmental movements in Europe, particularly during Amazon’s “peak season” between Black Friday and Christmas. The reports, obtained by Motherboard, were written in 2019 by Amazon intelligence analysts who work for the Global Security Operations Center, the company’s security division tasked with protecting Amazon employees, vendors, and assets at Amazon facilities around the world.
The documents show Amazon analysts closely monitor the labor and union-organizing activity of their workers throughout Europe, as well as environmentalist and social justice groups on Facebook and Instagram. They also reveal, and an Amazon spokesperson confirmed, that Amazon has hired Pinkerton operatives—from the notorious spy agency known for its union-busting activities—to gather intelligence on warehouse workers.
Internal emails sent to Amazon’s Global Security Operations Center obtained by Motherboard also reveal that all the division’s team members around the world receive updates on labor organizing activities at warehouses that include the exact date, time, location, the source who reported the action, the number of participants at an event (and in some cases a turnout rate of those expected to participate in a labor action), and a description of what happened, such as a “strike” or “the distribution of leaflets.” Other documents reveal that Amazon intelligence analysts keep close tabs on how many warehouse workers attend union meetings; specific worker dissatisfactions with warehouse conditions, such as excessive workloads; and cases of warehouse-worker theft, from a bottle of tequila to $15,000 worth of smart watches.
The documents offer an unprecedented look inside the internal security and surveillance apparatus of a company that has vigorously attempted to tamp down employee dissent and has previously been caught smearing employees who attempted to organize their colleagues. Amazon’s approach of treating its own workforce, labor unions, and social and environmental movements as threats has grave implications for its workers’ privacy and ability to join labor unions and collectively bargain—and not only in Europe. It should also be concerning to both customers and workers in the United States and Canada, and around the world as the company expands into Turkey, Australia, Mexico, Brazil, and India.
Amazon intelligence analysts appear to gather information on labor organizing and social movements to prevent any disruptions to order fulfillment operations. The new intelligence reports obtained by Motherboard reveal in detail how Amazon uses social media to track environmental activism and social movements in Europe—including Greenpeace and Fridays For Future, environmental activist Greta Thunberg’s global climate strike movement—and perceives such groups as a threat to its operations. In 2019, Amazon monitored the Yellow Vests movement, also known as the gilets jaunes, a grassroots uprising for economic justice that spread across France—as well as solidarity movements in Vienna and protests against state repression in Iran.
Protesters from environmentalist groups including Extinction Rebellion, ANV-COP 21, Alternatiba, Attac block an Amazon depot in Saint Priest, near Lyon, France, on Black Friday 2019. (Nicolas Liponne/NurPhoto via Getty Images)
The stated purpose of one of these documents is to “highlight potential risks/hazards that may impact Amazon operations, in order to meet customer expectation.”
“Like any other responsible business, we maintain a level of security within our operations to help keep our employees, buildings, and inventory safe,” Lisa Levandowski, a spokesperson for Amazon told Motherboard. “That includes having an internal investigations team who work with law enforcement agencies as appropriate, and everything we do is in line with local laws and conducted with the full knowledge and support of local authorities. Any attempt to sensationalize these activities or suggest we’re doing something unusual or wrong is irresponsible and incorrect.”
Levandowski denied that Amazon hired on-the-ground operatives, and said that any claim that Amazon performs the described activities across its operations worldwide was “N/A.”
This may be one of the first examples of a changing game: Google was sanctioned because of its intermediation activity and as such, it was sanctioned where the customer of the promoted services is located.
Although the amount of the sanction is not disclosed and is likely to be rather small, it seems quite likely that they will appeal to the second-instance court. In Italy, with respect to the national regulation authority AGCOM, the appeals court is the TAR (Tribunale Amministrativo Regionale) and – if I’m not wrong – they have 60 days to file the appeal.
It seems to me that the provisions of EU Regulation 2019/1150 are a solid legal basis for sanctioning this type of behaviour, so I don’t see much chance that the TAR will overturn the decision.
It is a case that deserves to be followed carefully…
UPDATE: as someone wrote in the comments, I was indeed overestimating my readers (or rather, the share of them who skim, reading the text superficially instead of reading to the end and realizing this was a sarcastic post)...
The situation is truly dramatic if one person in five in Italy thinks the virus was made by Google and Apple!
To them I point out that it is neither iCovid nor Covid!
As you know, I sat in parliament in the last legislature, elected on Monti’s list (which I left right away).
Now I have certain news, from an extremely reliable source, and I simply cannot stay silent.
Here is what I have learned:
- the virus is no such thing: it is merely a pathogen prepared according to a plan by the globalist powers to reduce the population and concentrate wealth in the hands of a few; but its scant, not to say nonexistent, lethal “efficacy” (plain for all to see: just look at the ambulances forced to drive around empty with sirens blaring) forced everyone into a rapid change of plans
- the agent is therefore not natural: it was indeed made in a laboratory in China, where they had the systems to stop it, which explains why the emergency in China and its surroundings is already over
- seeing China’s reaction, Big Pharma immediately rolled out the ‘vaccines’, which are in reality nothing but inertizers of the pathogens that had been released (we all know it takes at least 10 years to make a vaccine!)
- the plan now is to prepare the next “pandemic”, which will be truly lethal, but in a targeted way, personalized to the DNA of the ‘targets’, thanks to the CRISPR technique
- that is why the World Health Organization, notoriously funded by Soros, Gates and others from the Bilderberg group, has put Monti in charge of a commission to rethink future health systems
- and here is the bombshell: governments are collecting DNA samples from the population via the droplets of saliva we emit into the air, which settle on surfaces; the plan is to make pathogens that will strike only ordinary people, not members of the elites.
Fortunately there is a simple way to protect ourselves from this criminal plan: until the inertizers are distributed, let’s always wear a mask when we leave the house, and we’ll fool them!
UPDATE: I’m told that washing your hands often also helps limit the amount of biological material we leave on surfaces, from which the government extracts our DNA. One last piece of advice: stay away from strangers; we cannot be sure they are not masked government agents.
(credits: Deutsche Bahn)
TL;DR: I’m not arguing against contact tracing apps; I’m critical of Apple and Google’s approach.
In the past months our iPhones and Android phones received operating system updates, a joint effort by the two companies to help in the fight against COVID-19. The updates introduced new operating system functions (Application Programming Interfaces, APIs) to enable contact tracing apps developed by national health authorities. Without this update, tracing apps can’t work properly, mainly due to Bluetooth power management on iPhones.
When an iPhone’s screen is off, it detects Bluetooth beacons but does not emit them. It’s mute but not deaf. So, when an Android phone meets a ‘mute’ iPhone, it cannot detect the contact. In a nutshell, Apple’s and Google’s APIs allow for a different management of the iPhone’s Bluetooth to enable mutual detection of the contact even when the iPhone’s screen is off.
The companies have decided that this functionality is reserved exclusively for States’ authorities.
This is the critical part: Apple and Google have agreed to provide States with a feature that bypasses the normal operations of the operating system and enables State apps to do things that the rest of developers are barred from.
Let’s forget the Covid emergency for a moment and reread the sentence above: functions on our devices that only State authorities can access.
Once this path has been taken, which reserved functions will States ask Google and Apple for tomorrow? Is it irrational to think that States will ask for similar backdoors for surveillance purposes, obviously keeping them secret?
Opening the source code of States’ apps won’t help much in inspecting what the apps really do. “Reproducible build” is a technical process to check whether published source code corresponds to the apps we download from the stores. For technical reasons, it is not possible to have an independent, byte-for-byte verification of all apps using State-reserved functions: the possibility of democratic oversight is undermined.
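The core of that byte-for-byte check can be sketched in a few lines: rebuild the app from the published sources, obtain the store artifact, and compare cryptographic digests. A minimal sketch (the file paths are invented for illustration):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reproducible(local_build: str, store_download: str) -> bool:
    """The build is 'reproducible' only if the two artifacts match byte for byte."""
    return sha256_of(local_build) == sha256_of(store_download)
```

In practice even this simple comparison tends to fail for app-store binaries, which are re-signed and repackaged by the store; that is one of the technical obstacles referred to above.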
We welcome the effort of the duopolistic companies to make a contribution to public health. But this paradigm shift, making some functions reserved only to States’ authorities, is unacceptable. Reserving APIs to States is a paradigm echoing the behavior of totalitarian regimes, even worse than what George Orwell imagined.
We should welcome national health authorities who employ all available techniques to protect public health. The problem is not their use of these APIs: the APIs exist today, whether health authorities use them or not.
The problem lies upstream; it lies in the concept itself that our devices, the homes of our digital dimension, provide services exclusively accessible to States’ authorities.
A striking contrast with what William Pitt, 1st Earl of Chatham, who served as Prime Minister of Great Britain, said in 1763:
“The poorest man may in his cottage bid defiance to all the forces of the crown. It may be frail – its roof may shake – the wind may blow through it – the storm may enter – the rain may enter – but the King of England cannot enter.”
Source (Consciousness -> Meaning -> Symbols) -> Communication -> (Symbols -> Meaning -> Consciousness) Receiving
When a person communicates with another, the process may be described as above: the source is conscious and has the intention of communicating, together with a meaning she wants to convey; she encodes the meaning in symbols, which are then communicated. The receiver interprets the symbols she receives, extracting a meaning that she internalizes in her consciousness.
In this act of communication both source and receiving parties need to have some shared model to base the encoding and interpretation of the meaning; a shared context and culture (a common knowledge).
When a machine with an AI system generates a message (as in a chatbot), it does so by algorithmically assembling symbols based on a statistical model. There is no consciousness, no meaning to form the intention of the communication.
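A toy version of “algorithmically assembling symbols based on a statistical model” is a bigram chain: it strings words together purely according to observed frequencies, with no intention behind them. A minimal sketch (the training text is invented):

```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Record which symbol follows which: a purely statistical model."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int) -> str:
    """Assemble symbols by sampling successors; no meaning is involved."""
    out = [start]
    while len(out) < length and model.get(out[-1]):
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)
```

Any meaning a reader finds in the generated string is supplied by the reader, which is exactly the point of the paragraph above.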
The receiver would be wrong to assume that the machine intended the meaning carried in the message. It’s a purely mechanical product.
It’s the receiving party that can attribute a meaning to the symbols, not the source.
Meaning is only in the eyes of the beholder.
Think of an abstract painter. She is conscious and wants to communicate something to the observers; she encodes the message in the painting, but then the observer looks at the painting and is unable to attribute a meaning; the observer just sees symbols and cannot extract a meaning.
This is symmetric to what happens with AI: the machine just mechanically assembles symbols and then the receiver attributes a meaning.
The machine does not know what the message means, it does not know it is producing a message; it does not even know that its output is a message and that someone will receive it.
With AI it’s like looking at rocks, stalagmites or clouds and seeing an animal.
The clouds don’t know it’s an animal; they don’t even know it’s a shape. They don’t know.
It’s just us, looking at the clouds, who see an animal in them.
The important report is here.
It makes for very important and interesting reading.
A mini-synthesis of the findings in the 400-page report can be found in this Twitter thread.
As many readers know, I have devoted a significant part of my public life to promoting competition in the digital space (“dimensione immateriale”), proposed comprehensive bills while I was in parliament, and wrote a book, “Capitalismo Immateriale” (in English it would be “Virtual Capitalism”; a 5-star rating in Italy and a good number of reprints), explaining in great detail the business models and practices that this report has found applied by these big four.
These are a few highlights:
In June 2019, the Committee on the Judiciary initiated a bipartisan investigation into the state of competition online, spearheaded by the Subcommittee on Antitrust, Commercial and Administrative Law. As part of a top-to-bottom review of the market, the Subcommittee examined the dominance of Amazon, Apple, Facebook, and Google, and their business practices to determine how their power affects our economy and our democracy. Additionally, the Subcommittee performed a review of existing antitrust laws, competition policies, and current enforcement levels to assess whether they are adequate to address market power and anticompetitive conduct in digital markets.
To put it simply, companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons. Although these firms have delivered clear benefits to society, the dominance of Amazon, Apple, Facebook, and Google has come at a price. These firms typically run the marketplace while also competing in it—a position that enables them to write one set of rules for others, while they play by another, or to engage in a form of their own private quasi-regulation that is unaccountable to anyone but themselves. The effects of this significant and durable market power are costly. The Subcommittee’s series of hearings produced significant evidence that these firms wield their dominance in ways that erode entrepreneurship, degrade Americans’ privacy online, and undermine the vibrancy of the free and diverse press. The result is less innovation, fewer choices for consumers, and a weakened democracy.
Nearly a century ago, Supreme Court Justice Louis Brandeis wrote: “We must make our choice. We may have democracy, or we may have wealth concentrated in the hands of a few, but we cannot have both.” Those words speak to us with great urgency today. Although we do not expect that all of our Members will agree on every finding and recommendation identified in this Report, we firmly believe that the totality of the evidence produced during this investigation demonstrates the pressing need for legislative action and reform.
These firms have too much power, and that power must be reined in and subject to appropriate oversight and enforcement. Our economy and democracy are at stake. As a charter of economic liberty, the antitrust laws are the backbone of open and fair markets.
When confronted by powerful monopolies over the past century—be it the railroad tycoons and oil barons or Ma Bell and Microsoft—Congress has acted to ensure that no dominant firm captures and holds undue control over our economy or our democracy. We face similar challenges today.
Congress—not the courts, agencies, or private companies—enacted the antitrust laws, and Congress must lead the path forward to modernize them for the economy of today, as well as tomorrow. Our laws must be updated to ensure that our economy remains vibrant and open in the digital age.
Congress must also ensure that the antitrust agencies aggressively and fairly enforce the law. Over the course of the investigation, the Subcommittee uncovered evidence that the antitrust agencies failed, at key occasions, to stop monopolists from rolling up their competitors and failed to protect the American people from abuses of monopoly power. Forceful agency action is critical.
Lastly, Congress must revive its tradition of robust oversight over the antitrust laws and increased market concentration in our economy. In prior Congresses, the Subcommittee routinely examined these concerns in accordance with its constitutional mandate to conduct oversight and perform its legislative duties. As a 1950 report from the then-named Subcommittee on the Study of Monopoly Power described its mandate: “It is the province of this subcommittee to investigate factors which tend to eliminate competition, strengthen monopolies, injure small business, or promote undue concentration of economic power; to ascertain the facts, and to make recommendations based on those findings.”
Now let me add a personal touch.
I started analyzing and understanding patterns echoing abusive behaviors when I had to close a startup I funded and founded more than a decade ago.
The dominant phone player at the time was European, the popular video format for small-screen handsets was .3GP, UMTS was rolling out, smartphones were just born (but growing *fast*), the Wii console was a hit, and there were plenty of video sites.
My startup, based in the UK, developed a system that allowed a user to bookmark a video – any video, on any site – for later watching on any device the user owned (hiding all technical complexities such as format transcoding, synchronizing, etc.), allowing her to manage her playlists, share with a limited number of friends with a legal friendly “lending” mode, etc.
The smartphone app allowed for synchronization of the contents for offline viewing (there was not enough bandwidth for streaming and it was costly; there was no coverage in subways, trains or airplanes): when you arrived home/office, the app would recognize the wifi and start synchronizing all the videos to your device.
We even had a client for the Wii so you could watch your playlists on TV (and on PlayStations as well).
We published the app on the store and downloads were ramping up very satisfactorily, and just by word of mouth.
There was a clear need for a service that allowed users to aggregate videos from any of the many possible sources (*) and watch them on any device, anywhere, online or offline.
(*) there was life beyond YouTube, at the time.
Then our app was taken down from the store. And that happened time and again with motivations that seemed futile to me.
The app was clearly legit.
One day my CTO received a call which basically said that it was pointless for us to insist on publishing the app, because they were supporting YouTube and didn’t want a competing app.
We discussed it with the lawyers and for a while we unsuccessfully tried to get a written statement of what we had been told, in order to possibly start a legal action. Later, absent any proof of the facts above, I had no choice but to shut down the company.
I told my story to some friends and learned of other similar stories; in one case I was told they were asked for a few million euros in order to let the app into the store.
I have no proof, so I have been very careful to avoid references to a specific company.
This is a video of my system at the time (11 years ago).
Mea culpa... Computer scientists like me often believe, by virtue of their methodological approach to problems, that they know the best process for solving a problem, any problem whatsoever.
Then, when things don’t turn out as hoped, there is always someone to blame, or an unforeseen and unavoidable problem.
With age one comes to understand that the problems generally had tacit, unformalized aspects which, had they been given due consideration from the start, would have tempered that overconfidence; that the objective function one was trying to address was not the appropriate one; and so on.
Opinion: AI For Good Is Often Bad
After speaking at an MIT conference on emerging AI technology earlier this year, I entered a lobby full of industry vendors and noticed an open doorway leading to tall grass and shrubbery recreating a slice of the African plains. I had stumbled onto TrailGuard AI, Intel’s flagship AI for Good project, which the chip company describes as an artificial intelligence solution to the crime of wildlife poaching. Walking through the faux flora and sounds of the savannah, I emerged in front of a digital screen displaying a choppy video of my trek. The AI system had detected my movements and captured digital photos of my face, framed by a rectangle with the label “poacher” highlighted in red.
Mark Latonero (@latonero) is a fellow at the Harvard Kennedy School’s Carr Center for Human Rights Policy and a research lead at Data & Society.
I was handed a printout with my blurry image next to a picture of an elephant, along with text explaining that the TrailGuard AI camera alerts rangers to capture poachers before they kill one of the 35,000 elephants lost to poaching each year. Despite these good intentions, I couldn’t help but wonder: What if this happened to me in the wild? Would local authorities come to arrest me now that I had been labeled a criminal? How would I prove my innocence against the AI? Was the false positive a result of a tool like facial recognition, notoriously bad with darker skin tones, or was it something else about me? Is everyone a poacher in the eyes of Intel’s computer vision?
Intel isn’t alone. Within the last few years, a number of tech companies, from Google to Huawei, have launched their own programs under the AI for Good banner. They deploy technologies like machine-learning algorithms to address critical issues like crime, poverty, hunger, and disease. In May, French president Emmanuel Macron invited about 60 leaders of AI-driven companies, like Facebook’s Mark Zuckerberg, to a Tech for Good Summit in Paris. The same month, the United Nations in Geneva hosted its third annual AI for Global Good Summit sponsored by XPrize. (Disclosure: I have spoken at it twice.) A recent McKinsey report on AI for Social Good provides an analysis of 160 current cases claiming to use AI to address the world’s most pressing and intractable problems.
While AI for good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need. In fact, the fanfare around these projects smacks of tech solutionism, which can mask root causes and the risks of experimenting with AI on vulnerable people without appropriate safeguards.
Tech companies that set out to develop a tool for the common good, not only their self-interest, soon face a dilemma: They lack the expertise in the intractable social and humanitarian issues facing much of the world. That’s why companies like Intel have partnered with National Geographic and the Leonardo DiCaprio Foundation on wildlife trafficking. And why Facebook partnered with the Red Cross to find missing people after disasters. IBM’s social-good program alone boasts 19 partnerships with NGOs and government agencies. Partnerships are smart. The last thing society needs is for engineers in enclaves like Silicon Valley to deploy AI tools for global problems they know little about.
The deeper issue is that no massive social problem can be reduced to the solution offered by the smartest corporate technologists partnering with the most venerable international organizations. When I reached out to the head of Intel’s AI for Good program for comment, I was told that the “poacher” label I received at the TrailGuard installation was in error—the public demonstration didn’t match the reality. The real AI system, Intel assured me, only detects humans or vehicles in the vicinity of endangered elephants and leaves it to the park rangers to identify them as poachers. Despite this nuance, the AI camera still won’t detect the likely causes of poaching: corruption, disregard for the rule of law, poverty, smuggling, and the recalcitrant demand for ivory. Those who still cling to technological solutionism are operating under the false assumption that because a company’s AI application might work in one narrow area, it will work on a broad political and social problem that has vexed society for ages.
Sometimes a company’s pro bono projects collide with its commercial interests. Earlier this year Palantir and the World Food Programme announced a $45M partnership to use data analytics to improve food delivery in humanitarian crises. A backlash quickly ensued, led by civil society organizations concerned over issues like data privacy and surveillance, which stem from Palantir’s contracts with the military. Despite Palantir’s project helping the humanitarian organization Mercy Corps aid refugees in Jordan, protesters and even some Palantir employees have demanded the company stop helping Immigration and Customs Enforcement detain migrants and separate families at the US border.
Even when a company’s intentions seem coherent, the reality is that for many AI applications, the current state of the art is pretty bad when applied to global populations. Researchers have found that facial recognition software, in particular, is often biased against people of color, especially women. This has led to calls for a global moratorium on facial recognition and has led cities like San Francisco to effectively ban it. AI systems built on limited training data create inaccurate predictive models that lead to unfair outcomes. AI for Good projects often amount to pilot beta testing with unproven technologies. It’s unacceptable to experiment in the real world on vulnerable people, especially without their meaningful consent. And the AI field has yet to figure out who is culpable when these systems fail and people are hurt as a result.
This is not to say tech companies should not work to serve the common good. With AI poised to impact much of our lives, they have all the more responsibility to do so. To start, companies and their partners need to move from good intentions to accountable actions that mitigate risk. They should be transparent about both the benefits and harms these AI tools may have in the long run. Their publicity around the tools should reflect the reality, not the hype. To Intel’s credit, the company promised to fix that demo to avoid future confusion. Companies should involve local people closest to the problem in the design process and conduct independent human rights assessments to determine if a project should move forward. Overall, companies should approach any complex global problem with the humility of knowing that an AI tool won’t solve it.
5G is fundamental for the economic development of the country. Yes, but why?
Because of its lower latency?
Because it will allow us to talk to the fridge and to know if the yogurt has expired?
To enable the circulation of self-driving cars or to allow remote surgical operations?
Let’s try to shed some light, let’s understand – beyond the hype – why the real “killer application” of the new mobile generation, for users, will probably not be any of those advertised so far.
5G, some technical elements
Let’s remember that 5G promises lower energy consumption for the same bits transmitted (which is ecologically commendable), coupled with higher “bandwidth” (throughput) and lower latency, all with greater capillarity and lower emissions.
To understand throughput with a simple analogy, think of a fireman’s hose and a straw: the throughput is the amount of water that comes out of the pipe per unit of time. It depends on many factors, such as the cross-section of the hose, the water pressure and, last but not least, how much water per second the aqueduct is actually able to deliver to the hose.
Latency is a measure of how long it takes for a water molecule entering the pipe to exit at the other end.
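In this analogy, the time to deliver a piece of content combines both quantities: roughly one latency, plus the size divided by the throughput. A simplified sketch ignoring protocol overhead (the figures are illustrative):

```python
def transfer_time_ms(size_mb: float, throughput_mbps: float, latency_ms: float) -> float:
    """Rough fetch time: one latency, plus the time for the 'water' to flow through."""
    size_megabits = size_mb * 8  # megabytes -> megabits
    return latency_ms + size_megabits / throughput_mbps * 1000

# A 5 MB file over a 50 Mbps link with 50 ms latency:
# 40 megabits / 50 Mbps = 800 ms of transfer, plus 50 ms of latency = 850 ms
```

For large files, throughput dominates; for many small requests, latency dominates. That is why the two quantities matter for different applications.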
4G (LTE-A): 50 Mbps and beyond
Table 1 – Throughput (“bandwidth”) of the various standards (indicative values)
(*) without congestion
Table 2 – Latency (milliseconds) of the various standards (indicative values)
It is often said that 5G will have a latency of 1 ms. This is a lower bound, obtained under laboratory conditions, that will not be experienced in practice.
We must also ask ourselves “latency to get where?”. For a bit inserted into the network in Rome to come out in Sao Paulo, Brazil, even at the speed of light and with few switching devices in between, over 100 ms are needed (Sao Paulo is far away!). So in practice latency will go from approximately 150 ms with 4G to about 120 ms with 5G. It may seem a small reduction, but it’s actually a big gain.
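That floor follows from simple physics: light in optical fiber travels at roughly 200,000 km/s (about two thirds of its speed in vacuum), so the round trip over the roughly 9,500 km between Rome and Sao Paulo already costs on the order of 100 ms before any switching or queueing. A back-of-the-envelope sketch (the distance is an approximation):

```python
SPEED_IN_FIBER_KM_S = 200_000  # light slows to about 2/3 c inside optical fiber

def round_trip_ms(distance_km: float, switching_ms: float = 0.0) -> float:
    """Round-trip propagation delay over fiber, plus any switching delay."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000 + switching_ms

# Rome -> Sao Paulo, great-circle distance ~9,500 km (approximate):
# propagation alone gives ~95 ms round trip, before any network equipment.
```

No radio technology can reduce this component: 5G only improves the access segment, which is why the end-to-end gain is roughly 30 ms rather than a collapse to 1 ms.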
If we consider that most of the content and services we usually access are not located on the opposite side of the world but in a datacenter near us, we can understand that when accessing blog.quintarelli.it the latency will be nearly halved. If we then put a myriad of servers with replicated content in each city, the latency could fall even below the values indicated in the table.
Low latency: for which business reason?
What type of business would need such a low level of latency to justify such a large and widespread server infrastructure? Certainly not streaming video – it is not dramatic to wait 30 milliseconds more for a movie to start – but, for example, interactive games that require fast reaction times (the reaction times of sportsmen vary between the 100 ms of a world-record 100-meter sprinter and the 250 ms of a “normal” professional). So a professional gamer connected to a 5G access would have an advantage over other professionals using 4G, a difference that could earn him the first step of the podium. (Of course, not all of us are professional gamers.)
All this, provided that the rest of the network is carefully dimensioned and has no other bottlenecks (congestion) other than the access segment. And this is rarely the case.
While I’m writing, a simple ping command (which provides an estimate of round-trip latency) from my PC to a server located at the heart of the Italian network reports a minimum value of 46 ms and a maximum of 68 ms, a variability of about 50%. (Long live smart working! With the many people who are normally in the office contending for the available network capacity, the variability is often larger.)
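A rough way to reproduce such a measurement without the ping utility is to time TCP handshakes, which also approximates round-trip latency and shows its variability. A sketch (the target host is whatever server you want to probe; it only needs an open TCP port):

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, attempts: int = 5):
    """Estimate round-trip latency by timing TCP connection handshakes."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # the handshake itself is the measurement
        samples.append((time.perf_counter() - start) * 1000)
    return min(samples), max(samples)

# Example (requires network access):
# low, high = tcp_rtt_ms("blog.quintarelli.it")
```

The spread between the minimum and the maximum is exactly the congestion-driven variability discussed above.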
There is then a further element to consider, namely where to put the application. Think of a virtual reality or augmented reality helmet: when the wearer turns his head, in order not to feel motion sickness, it is important that the sliding of the images seen by the eye accompanies the sliding sensation determined by the vestibular apparatus in our inner ear. Having a low latency can be very useful to mitigate this annoying sensation.
The closer the application is to the viewer, the better.
Bringing applications as close as possible to users is a paradigm called edge computing. Some propose deploying myriads of small datacenters next to the antennas. But what application requires such low latency, while staying within the operator's business perimeter, to justify these investments? That is, to attract a sufficient number of users willing to pay a premium over the traditional price in order to benefit from this latency reduction?
The question is not trivial, especially if you consider that the most extreme level of edge computing is putting the computer in the user's home. If the application were delivered from a home console connected by cable (or an ultra-wideband wireless link) to the headset, it would be (with few exceptions) even better than delivering it from a server near the network edge (near the radio access portion of the network). Better still if the application ran directly in the headset.
So what is 5G really for?
Some TV commercials promote the image of 5G-enabled drones delivering cups of "frappuccino" to our door, children entertained by dinosaur holograms popping up in the garden, advanced telemedicine so advanced that a surgeon can perform a remote operation while attending a wedding, cars driving themselves through the city, and spectators at basketball games rotating their point of view around the players on the court.
But are these really the applications of 5G? Searching online you will find interesting ideas for uses less extreme than TV advertising. I comment on some of them:
- "5G will make life easier for robots, that is, for the world called the Internet of Things. Things, from traffic lights to the fridge at home, always connected to the network. The goal of 5G is basically to offer in mobility all the essential features of fixed networks, but with a latency even lower than cable connections".
Traffic lights and refrigerators in mobility. Are we sure they need all this bandwidth, and that 20 ms less is decisive in switching from green to yellow, or in telling us the yogurt is running out?
- “…will also allow musicians, singers and artists who are in different places to interact in a single show”.
Reduced latency, together with low-latency interfacing circuits for musical instruments, will probably allow people to play together remotely. The task is not trivial and also depends to some extent on the type of music played, but this is a use case that seems plausible, even if it is not yet the killer application of 5G.
- “This connection will be important in the medical field for the new connected ambulances that will be able to give a lot of information to the hospital even before the patient arrives. Planned uses are in telemedicine and rehabilitative robotics with the possibility to use IoT devices and remote control”.
Of course, we are talking about all those ambulances that will carry on board diagnostic imaging systems (CT, MRI) that must transmit many gigabytes in a few seconds, riiiight… because transmitting oxygen-saturation readings and an ECG takes just a few bytes, and during the lockdown we all got used to holding video conferences. Likewise for physiotherapy rehabilitation: evidently some believe that the electromechanical machines (today found only in specialized centers), if used at home, need far more than the 50 Mbps a current LTE connection can provide, and that you cannot check whether a patient is bending a knee without the reaction-time latency of a professional 100-meter sprinter.
- "In the field of security and video surveillance, high-resolution cameras with 5G support will be used, installable in stations and crowded places".
4K video requires about 16 Mbps; 8K video (7680 × 4320) will realistically require about 40-50 Mbps. Nothing a decent fixed-line access can't support. Of course, you can't watch it if you're not at home.
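These figures can be sanity-checked with a back-of-the-envelope bitrate model. The codec-efficiency factor below is an assumption for illustration, not a measured value:

```python
MBPS_4K = 16  # typical 4K streaming bitrate cited in the text

def estimate_8k_mbps(codec_gain: float = 0.3) -> float:
    """8K has 4x the pixels of 4K (7680x4320 vs 3840x2160); assume a
    newer codec saves ~30% bitrate at equal quality (an assumption)."""
    pixel_ratio = (7680 * 4320) / (3840 * 2160)  # = 4.0
    return MBPS_4K * pixel_ratio * (1 - codec_gain)

print(estimate_8k_mbps())  # ~45 Mbps, within the 40-50 Mbps range
```

Even under pessimistic codec assumptions, the result stays well within what an ordinary fixed access can carry.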
- "It will also be useful in tourism and journalism; the latter, thanks to 5G, will be able to provide more timely images and news from every part of the city".
Which, as we know, is the core problem of journalism today…
- “5G also brings benefits to self-driving cars, as they will be able to communicate in real time with the road infrastructure and obtain important information for road safety and security”.
One wonders how autonomous cars manage today. Do we really need that latency reduction to signal positions and obstacles? And even granting that a much-reduced latency is absolutely necessary, wouldn't it make more sense to establish direct car-to-car connections, without passing through a network that inevitably adds some latency?
As you know, I am very skeptical about the concrete possibility of cars autonomously whizzing through cities, because of the safety problems of an uneven and, above all, unmonitored environment. Highways, being homogeneous and very well supervised environments, are a different story. We will probably see autonomous driving on highways, probably with reserved lanes; and more than inter-vehicle communication, we may need a lot of mobile bandwidth to work or entertain ourselves during the trip.
- “Even smart homes will benefit from 5G, all objects in the home will be able to communicate with each other, receive information from the outside and be controlled remotely from a single device”.
What of this cannot be done today with fixed network access and Wi-Fi? (Hoping the device that remotely controls our home actually belongs to us…) Are we sure that the complexity of Narrowband IoT (NB-IoT) and its 5G version will prevail over Wi-Fi and Bluetooth? The probability of that happening in the short to medium term seems quite low to me, unless some black swan provides the necessary impulse.
- “An ultra-performing network, will be fundamental for the transition to the Internet of things, i.e. to ensure the development of applications and services for the Smart City based on sensors (for example for traffic control, waste collection, urban lighting, logistics)”.
In 20 milliseconds, many cars pass through intersections and a lot of waste gets thrown away. And if a street lamp burns out, we really must be the first to know!
- “The new, very fast 5G mobile networks will also help improve safety on the roads. In addition to allowing more data to be transferred in the same unit of time, they have a much lower latency and a reduced error rate: if one data packet in a thousand is “lost” with 4G, with 5G you get up to one in a million. It is these last two aspects that will make the electronic security systems of cars more effective, which will “dialogue” with each other in real time and with the certainty that the information arrives”.
I discussed latency above. As for packet loss, as anyone who deals with networks knows, transport protocols (TCP) have control mechanisms that guarantee to applications that all transmitted packets arrive. This property, therefore, is already assured by current technologies.
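To illustrate the point (a minimal local sketch, not a claim about any particular network): an application reading from a TCP socket simply receives the complete byte stream in order; retransmission of any lost packets happens below, in the transport layer, invisibly to the application.

```python
import socket
import threading

received = bytearray()

def sink(server: socket.socket) -> None:
    # Accept one connection and drain it until the peer closes.
    conn, _ = server.accept()
    while chunk := conn.recv(4096):
        received.extend(chunk)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # pick a free ephemeral port
server.listen(1)
t = threading.Thread(target=sink, args=(server,))
t.start()

payload = b"x" * 1_000_000      # one million bytes
client = socket.create_connection(server.getsockname())
client.sendall(payload)
client.close()
t.join()
server.close()

print(len(received))            # every byte arrived, in order
```

The application never sees a "lost packet": the kernel's TCP stack acknowledges, retransmits, and reorders segments before handing the data up.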
- “The productive world will be revolutionized through the full digitalization of production facilities – the so-called Industry 4.0 – and the development of precision agriculture”.
Indeed, unlike in the residential market, in the business market there could be real interest in 5G for IIoT (Industrial IoT). 5G will allow a much higher communication density, up to one sensor per square meter, that is, one million sensors per square kilometer against the 50 thousand allowed by the best NB-IoT technologies. We must also bear in mind that in industry wired network technologies are very expensive (PROFINET cables and switches cost an order of magnitude more than their non-industrial equivalents) and introduce rigidity into installations that could be realized and reconfigured more flexibly with wireless connections. (examples 1, 2, 3, 4)
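The density comparison is simple arithmetic on the figures just quoted:

```python
# Density figures from the text: 5G targets ~1 device per square meter.
SQM_PER_SQKM = 1000 * 1000

devices_5g = 1 * SQM_PER_SQKM       # 1 per m^2 -> 1,000,000 per km^2
devices_nbiot = 50_000              # best NB-IoT figure cited above

print(devices_5g)                   # 1000000
print(devices_5g // devices_nbiot)  # 20x the density
```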
Be careful! I am not saying that 5G doesn't help, but that the "killer applications" proposed today (which often include "life saving" arguments, because on those you cannot skimp on investments), with the notable exception of the business market and IIoT, are generally trivial and almost certainly destined not to materialize.
Figure 1 – Piazza Maggi, Milan
This does not mean that a new network infrastructure should not be built; on the contrary! The first motivation for a new infrastructure is that it must precede demand and enable it: the new infrastructure will serve a demand that does not exist today. Einaudi, a very famous Italian economist and politician, said that markets express demands, not needs, meaning they express immediate requirements rather than long-term needs.
Also from this point of view the idea of pooling investments to co-invest in a 5G network infrastructure seems reasonable.
The networks they are a changin’
"Mobile" networks (which are not mobile: it is people who are mobile, not the networks) work by emitting signals that fade as you move away from the antenna, so phones and antennas have to transmit with increasing power to keep communicating, like two people who keep talking while walking away from each other. The alternative is to place many more antennas, much closer to the user, so they can communicate at lower power; the lower the power, the larger the number of antennas needed. This is what has happened in mobile telephony: at every stage of evolution, from GSM to UMTS (3G) to HSDPA (3.5G) to LTE (4G), emissions decrease and antenna density increases.
Consequently, the wireless access segment (the part from the user to the antenna) tends to become shorter: from the many kilometers of GSM (with low bandwidth) to a few tens of meters with Wi-Fi and 5G (which provide a lot of bandwidth).
With 5G, in Italy we will have hundreds of thousands of small antennas with very low emissions (also because we have the strictest emission limits in the world). In the future we will find them at the base of many buildings, and they will ubiquitously provide us with Wi-Fi-like performance wherever we go.
Each of these antennas will need to be fed bandwidth through a (usually fixed) network hookup. Hence the obvious importance of fixed networks, whether built on the "old cable TV" plant, as in some European countries, or on telephone networks with optical fiber. But it is good to move beyond this conceptual distinction too. The network is always one and the same: it carries data of any type. It does not matter whether it was born for the telephone (copper pair), for TV (cable networks), or whether it is made of optical fiber.
Such a high density of antennas means that each antenna will be used by fewer people. If an antenna covers a radius of two kilometers, all users in a small town will connect to it. If an antenna covers a twenty meter radius, only the few people in that building will connect to it.
The transmission capacity available from an antenna is shared among the people who connect to it. In the case of the small town it is shared among many people; in the second case among very few, so each individual in the second scenario will have much more bandwidth available than their peers in the first. Increasing the capillarity of the antennas thus reduces emissions and increases the performance available to users.
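The sharing argument is just a division; a sketch with illustrative numbers (the 1 Gbps cell capacity and the user counts are assumptions for the example, not operator data):

```python
def per_user_mbps(cell_capacity_mbps: float, active_users: int) -> float:
    """Average capacity per active user on one antenna."""
    return cell_capacity_mbps / active_users

# Same assumed 1 Gbps cell capacity, shared by a small town's
# worth of users vs. the handful of users in one building.
print(per_user_mbps(1000, 2000))  # 0.5 Mbps each on the wide cell
print(per_user_mbps(1000, 10))    # 100 Mbps each on the small cell
```

Shrinking the cell is what turns the same radio capacity into a per-user experience comparable to a fixed line.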
Lower emissions thanks to 5G
It seems counter-intuitive: lower electromagnetic emissions with more antennas, and higher capacity with lower emissions. An example helps to clarify: imagine a room with forty people, two of whom are talking through megaphones. The noise pollution will be very high and the overall capacity of the room will be just one conversation. If everyone whispers, the noise pollution will be minimal and the overall capacity will be twenty simultaneous conversations.
Ultimately, the network is not what we are used to thinking it is, a single object managed by one operator. It is a mix of different types of routes, built with different technologies, much like a road network.
From the user's point of view, in the future there will be no great distinction between fixed and wireless network. The fixed network will have very high capillarity, and at its edge there will be an antenna. If located inside the house, it will still be a Wi-Fi access point managed independently by the more sophisticated users; if located outside the house, it will instead be a 5G antenna managed by an operator.
A mental image of networks
Let's picture circles to imagine the service boundaries of an operator.
Today there is a circle that reaches the users' homes: the fixed network, to which a Wi-Fi access point is connected. The access point can sit just inside or just outside the circumference, depending on whether it is provided and managed by the operator (inside the circle) or installed directly by the user (outside the circle). Outside this circle are the user's devices, from PCs to smart refrigerators (!?) to TVs connected to disks that store documents, photos and movies (with users managing their complexity).
Then there is another circle, that of the "mobile network", whose radius is smaller. It is the network that feeds the cellular radio stations located just inside the edge of the circle, within the operator's perimeter, to which users connect with their mobile devices.
Over time this second circle has expanded, and with 5G it will come so close to the border of the first that many users will find it more convenient to connect their devices to it directly and entrust the operator with the custody of their documents, photos and movies (without having to manage the complexity), reducing their management burden, often increasing their level of security, and gaining access everywhere to what would otherwise be available only at home or in the office.
These two circles, the fixed network and the mobile network, have grown closer over time and will continue to do so with 5G; there is a large overlap between them. The more widespread the "fixed" network becomes, the more it will also contribute to the infrastructure for 5G.
The telephonist’s drama
Let's go back to the use cases: many of those told, especially those presented as life-saving, suffer from what we could call "the telephonist's drama". I call it that because I often perceive it when talking to many friends, very competent people, who work for telco operators: born as telephone companies, these operators keep that imprinting in their DNA, and my friends still seem to me a bit conditioned by that way of thinking.
An example of a typical “telephone” way of thinking is that of value added services.
Imagine a system to improve driving safety, based on a myriad of temperature and humidity sensors scattered along a mountain road, so that a car approaching a blind curve can be warned of a high risk of ice.
This is a typical use case in 5G narratives.
Let's ask ourselves: did Vodafone or Telefonica install those sensors? If Vodafone installed them, will they be accessible to cars connected to Telefonica's network? And to those connected to Iliad's?
If we were in a monopoly situation, the problem would not arise: the same single operator puts the sensors and connects the cars.
But in a competitive situation, how can it be solved? All mobile operators would have to sign service interoperability (and payment settlement) agreements whereby one operator gives the others access to the information from the sensors it has placed on street X, a second operator gives the first access to the information for street Y, and then periodically they settle accounts to see who owes whom how much, after compensation.
You can bet that, while they are busy negotiating multiple bilateral service roaming agreements, someone will come along and install sensors that communicate via the Internet, taking these potential services away from the operators and delivering them over the top, monetized through advertising, credit cards or freemium models.
Does this remind you of something?
In order to avoid such a scenario, the operators should probably form a consortium and create a joint over-the-top service provider, presenting themselves on the market as a single integrated entity.
I doubt it will ever happen. Genetic imprinting dies hard. Consider that in the 2020 5G White Paper issued by the NGMN Alliance (Next Generation Mobile Networks Alliance, an association of operators, vendors, manufacturers and research institutes in the mobile industry) the word "Internet" appears only once (just once!), in the glossary, in the definition of "IIoT: Industrial Internet of Things".
What fate for 5G networks?
I have to make a major premise: the operators’ business cases to sustain the realization of widely spread 5G networks, with myriads of access points, are anything but obvious. It is likely that we will see a gradual roll-out, starting where the market can reward technology, i.e. in the business market, especially in industry.
As far as the residential market is concerned, growing 5G adoption could lead to a reduction in the myriad of Wi-Fi access points, now bought everywhere and by anyone, installed by anyone and uninspected for the quantity and quality of their emissions. For many users, 5G could greatly reduce the need for a fixed hookup to their homes, replacing the last meters of Wi-Fi access with the last tens of meters of 5G.
As of today, the ways we use the network at home and on the move are different: at home we have our files, backups and devices, which (in general) we cannot access when we are out. A scenario like the one described could allow the members of a family to enjoy all their data in mobility, anywhere, as if they were at home. Perhaps this could be a killer app, one that might interest operators, though it would put them in a clash with OTT players and device manufacturers who already provide these services, albeit with a looser billing relationship with the customer.
However, a fundamental consideration must be made: history teaches us that at home as at work, on the Internet, people want to do more and more things in less and less time. So traffic increases substantially and inexorably.
It is true that in nature no exponential curve grows forever (sorry, Mr. Kurzweil!): all exponential curves sooner or later become logistic curves and flatten. There is certainly an asymptote to how much bandwidth a user can consume. My impression is that we are still very far from reaching it. This seems confirmed by recent usage data: bandwidth used in Italy (and in the world at large) is still growing by about 50% per year with a roughly constant number of users.
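At 50% a year the compounding is fast; a quick computation of the implied doubling time, pure arithmetic on the growth figure quoted above:

```python
import math

annual_growth = 0.50  # ~50% per year, the trend cited in the text

# Years needed for traffic to double at this compound rate.
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_years, 2))  # ~1.71 years

# After 5 years, traffic is ~7.6x today's level.
print(round((1 + annual_growth) ** 5, 1))
```

In other words, traffic doubles roughly every 21 months, which is why the asymptote still feels far away.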
As Alfonso Fuggetta put it with a felicitous expression, one cherry leads to another: we start with simple things and then do more and more, with ever greater complexity, ever more requirements and ever more bandwidth. And at the same time our expectation of ever-shorter reaction times grows.
Ultimately, for users, the killer app is still human impatience.
The Ministry has set up 140 Telecommunications Interception Center (CIT) rooms with a dedicated network, dedicated cabling and a dedicated stock of laptop PCs. In each CIT room a ministerial server has also been installed, along with software developed to manage the digital multimedia archive and the document archive; 60 million euros have already been spent on technological infrastructure, building works and the necessary purchases; 700 servers and racks are dedicated to interceptions alone; over 1,100 dedicated PCs are destined for the listening rooms; about 3,500 people are involved in specific training (administrative staff, magistrates and judicial police).
These are Italian health passports (“fedi di sanità“) in force during the plague of 1722 (thanks wikipedia, ebay).
At the beginning of the pandemic I read an interview with a very famous American virologist (which I can’t find) in which he took it for granted that at some point we would have wristbands attesting our immunity.
The Electronic Frontier Foundation wrote a very sharp piece against the idea of digital immunity passports. In it, they mix technology-based considerations with medical ones. It is clear that a prerequisite for such a passport to make sense is that the medical tests be accurate and that immunity be long-lasting and predictable. I frankly do not believe the EFF is in a position to claim this will not be the case. And the ethical issue of the "fedi di sanità" themselves, and of the discrimination they could produce, is on closer inspection independent of the technology: it makes no difference whether it is a wristband or a smartphone. I write about it below.
The criticisms related to the technological aspect seem to me very much linked to the situation in the USA, not so much to the Italian one:
- People could get used to showing digital "proofs of status" that could concern other aspects of their lives that gatekeepers consider relevant. But it seems to me this is already the case when we pay for the subway with a smartphone or show a digital cinema or train ticket. We can do everything on paper, but the user can choose to do it digitally.
- All that information could be accumulated in a database. Yes, but only if the entity performing those checks is the same. In the example above, that would require the subway, cinema and railway operators to outsource their verification systems to the same provider, with access to the data (something the GDPR limits).
- Being forced to hand your phone, whether locked or unlocked, to a law enforcement authority could be a source of abuse. I don't know how many people know if and when they can refuse to hand over their device to the police. In Italy this is possible only in the event of a search, seizure or arrest in flagrante delicto (so, in general, you cannot refuse). In any case, you are not required to hand over your PINs or encryption passwords.
- Storing medical information on devices could be a source of data leaks. But, on closer inspection, so is accumulating the data in the systems of hospitals and testing laboratories.
I would just point out that it is possible to build secure systems that disclose information only upon the user's decision. Covidcreds is an interesting system based on the SSI (Self-Sovereign Identity) paradigm (blockchain being an implementation choice with advantages and disadvantages).
From the point of view of ethical considerations, this is perhaps less obvious than it might appear; Wikipedia reports two opposite points of view:
Human Rights Watch argues that requiring immunity certificates for work or travel could force people to take tests or risk losing their jobs, create a perverse incentive for people to intentionally become infected in order to acquire immunity certificates, and risk creating a black market for fake or otherwise forged certificates. By limiting social, civic and economic activities, immunity passports can "exacerbate existing disparities in gender, race, ethnicity and nationality".
On the other hand, it is argued that [in case of restrictions] it would be disproportionate to deprive immune persons of their fundamental freedoms. This would in fact constitute a case of collective punishment. Consequently, Govind Persad and Ezekiel J. Emanuel stress that an immunity passport would follow the ‘less restrictive alternative’ principle and could also benefit society: the increased security and economic activity allowed by immunity licences would benefit the unlicensed. For example, the preferential employment of immune persons in nursing homes or as home healthcare workers could reduce the spread of the virus in those facilities and better protect those most vulnerable to COVID-19. Friends, relatives and clergy who are immune could visit patients in hospitals and nursing homes.
All the above is to note that, without clamor, the fedi di sanità are effectively back among us.
The Italian Minister of Health, Roberto Speranza, has signed an ordinance that provides mandatory swabs for anyone entering Italy from Spain, Greece, Malta and Croatia or, alternatively, a negative serological test done no more than 72 hours before entering our country. In other words: either you take the test when you arrive (and stay indoors until the result), or you take a test beforehand and arrive with a fede di sanità.
We didn't pay much attention, probably because it concerns international travel. But what if a fede di sanità were also required to go from Milan to Ancona? Or from Turin to the San Siro stadium? And when the vaccines arrive, will we want to restrict cinemas and stadiums to the immune? What will cinema owners think of the chance to resume business after more than a year of paralysis? Will they have to wait many more months until we achieve herd immunity?
I believe that in a few months (or weeks) the subject will become topical.
About four months ago I felt the need to reply to an article by Franco Debenedetti entitled "Il caso TIM e il bastone tra le reti". This was my reply.
Now I take the cue offered by a new article of his in the Huffington Post to give my point of view on the matter.
|Beppe Grillo has never been soft on Telecom Italia: around 2010 he attended its shareholder meetings; one evening Santoro invited me to Annozero to discuss with him a matter of real estate that the telecommunications incumbent was selling. It is therefore with some surprise that one reads his accusation, indeed condemnation, as is his style, against Open Fiber, the competitor created for it by Renzi in 2014.||I remember when Beppe came; I was at the same shareholder meeting; Sergio Cusani was there too, with a mandate from the CGIL union to review Telecom's post-privatization accounts.|
|It is worth recalling the reason given at the time, a strategic reason, of course. Enel had a plan to replace meters with "smart" ones. Since you have to enter buildings or even apartments anyway, this was the brilliant idea, you can also bring in an optical fiber, a "synergy" (another magic word) that would save money and time, a synergy reinforced by the fact that Enel owns the cable ducts into which the fiber can be threaded. These were fairy tales, and those who know the field said so even then.||Why would synergies between electricity and fiber networks be a fairy tale? A network is mostly civil infrastructure: digging, ducts, manholes…
In Ireland, ESB (Electricity Supply Board, the state-controlled former electricity monopolist) formed a joint venture with Vodafone Ireland to roll out FTTH largely using ESB’s electrical infrastructure. They say so plainly: https://siro.ie/about-us/
Nor is it an isolated example. There are others (mostly local) all over the world, from Sweden to the USA.
In Italy part of those synergies, precisely the meter replacement, was lost through “regulatory” actions.
Telecom itself has exploited electrical infrastructure; this happens, and it is regulated.
|This, they said, would bring fiber directly into the home, FTTH, and not just to the cabinet, FTTC, which nonetheless allows download speeds above 100 Mbit/s, enough for domestic services for years to come.||Still? As I explained in my previous reply, vectoring cannot be used in monopoly situations under the current European rules, which mandate infrastructure competition.
Apart from that, there is the problem that xDSL/FTTC has a 10:1 ratio between download and upload. As was seen during the lockdown, an ordinary family with a couple of children cannot, without fiber, have the parents working and the kids studying at the same time, because videoconferencing uses upload symmetrically with download.
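The asymmetry argument can be made concrete with a back-of-the-envelope calculation. A minimal sketch; the line speed and per-call upload bitrate below are illustrative assumptions of mine, not figures from the article:

```python
# Why an asymmetric FTTC line chokes on simultaneous video calls.
# All figures are illustrative assumptions.

FTTC_DOWN_MBPS = 100                 # assumed FTTC download speed
FTTC_UP_MBPS = FTTC_DOWN_MBPS / 10   # the ~10:1 down/up ratio cited above

CALL_UP_MBPS = 3.0   # assumed upload per HD video call
FAMILY_CALLS = 4     # two parents working + two children in class

needed_up = CALL_UP_MBPS * FAMILY_CALLS
print(f"Upload needed: {needed_up} Mbit/s, FTTC offers: {FTTC_UP_MBPS} Mbit/s")
print("FTTC sufficient?", needed_up <= FTTC_UP_MBPS)  # False: the uplink saturates
```

Under these assumptions the four calls need 12 Mbit/s of upload against the 10 Mbit/s the line offers, while a symmetric FTTH line would have ample headroom.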
|Enel founds Open Fiber, which later became a 50/50 joint venture with Cdp. The Government identifies the white areas (so-called market-failure areas)…||The white areas were not identified by the Government; they were declared by Telecom Italia and the other telephone operators, because they are, by definition, the areas in which no private operator has decided and announced that it will invest.
To grasp the scale of the problem, consider that there are today 7,904 Italian municipalities. So roughly 97% of Italian municipalities were white areas. THIS was the problem: in 97% of municipalities the operators had no investment plans!
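The 97% figure can be checked directly from the two counts given in the article (7,904 municipalities in total, 7,632 in the white-area tenders):

```python
# Check of the "97% of municipalities were white areas" claim,
# using the two figures quoted in the article.
total_municipalities = 7904        # Italian municipalities today
white_area_municipalities = 7632   # municipalities in the white-area tenders

share = white_area_municipalities / total_municipalities
print(f"White areas: {share:.1%} of all municipalities")  # 96.6%, i.e. roughly 97%
```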
I recall that, together with about a hundred top managers and academics, we had recently bought, at our own expense, a page in Corriere della Sera precisely to demand a strategy for the digitalization of the country.
|…finds the funds to invest; three tenders are held, which Open Fiber wins with rock-bottom offers (after all, it has the ducts…): with 100% financing of more than one and a half billion euros, Open Fiber commits to completing, within three years (i.e. by 2020), the fiber-optic connection of just under 8 million real-estate units (so-called U.I. over 100) in 7,632 municipalities.
|Not so rock-bottom, if it is true, as it is, that the discount on the tender base price, according to public data, was:
It is true that the 2020 deadline was not met, but the responsibility lies not with OF but with those who engaged in the delaying practices sanctioned by the AGCM.
The real question is: given that the first announcement of an imminent build-out of a fiber access network was made, if I remember correctly, by Riccardo Ruggiero just after taking office as CEO of Telecom in 2001, how many business/residential real-estate units had been passed before 2019?
|In March 2020 updates appear on the Ministry’s website: the municipalities are down to 6,230 (-18.4%) and the real-estate units fall from just under 8 million to just over 6 million. So writes the Hon. Vincenza Bruno Bossio (Pd) in her parliamentary question to the Minister of Economic Development.
From the Mise website one learns that by the end of 2020 the infrastructure will be made available in only 16% of the municipalities indicated in the tender. To date, of 2,914 open work sites, 875 are “ready for testing”, of which 606 have been tested.
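The -18.4% reported in the parliamentary question is consistent with the two municipality counts quoted in the article:

```python
# Cross-check of the -18.4% reduction in municipalities.
municipalities_in_tender = 7632   # original white-area tender scope
municipalities_updated = 6230     # March 2020 update on the Ministry's website

reduction = (municipalities_in_tender - municipalities_updated) / municipalities_in_tender
print(f"Reduction: {reduction:.1%}")  # 18.4%
```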
|I cannot reconcile these numbers, but that will depend on a situation that evolves over time.
Apart from that, I have already recalled that the AGCM sanctioned TI for delaying practices; and one should also remember that, once the sites are ready, the testing is not done by OF and does not depend on the company…
|As if that were not enough, FTTH, the only type of connection the Communications Authority allows to be advertised as “real fiber”, in the vast majority of cases reaches not the building (which is what the State paid for) but a point 20 to 40 meters from it, often on a light pole, so that it would more properly be called FTTP, Fiber to the Pole.
|Well, if I am not mistaken, the tender required exactly this: stopping within 40 meters of the building, precisely so as not to waste resources. When the first user asks to be connected, the fiber is brought all the way into the home, avoiding a pointless outlay on homes that may never request an FTTH connection.
It is FTTP when the fiber stays outside the building and enters the home by other means (in some parts of the world the last stretch is radio or copper), but in Italy the fiber reaches the home when the customer activates the service, so it is FTTH.
|TIM, in the course of a lively clash with the Minister of Economic Development, was barred from making connections in the areas that Open Fiber had been awarded.||That is not accurate; TIM was barred from cabling areas that it had itself declared it would not cable: “TIM: I’m not going to area x.” “OF: OK, then I’ll go.” “TIM: No, then I’ll go.” If you do this repeatedly, I start getting some ideas about why, don’t you think?|
|The result is that Open Fiber, which was supposed to close the digital divide, with its delay is in fact an obstacle to
|The homes connected by Open Fiber number about 8.5 million, and Italy has finally leapt forward in the DESI index, which last year still ranked it second to last for FTTH coverage.|
|Fortunately there are private companies that, with wireless technology, managed to meet the great surge in demand caused by the lockdown.||That is not correct. Wireless has always been part of the ultra-broadband tenders, and it is particularly suited to remote places and scattered houses, where fixed fiber is not economically sustainable, not even with incentives.
Between the performance of wireless networks and that of fixed networks there are always roughly two orders of magnitude of difference (fixed always performs better).
|But Beppe Grillo does not stop there: he also has what for him would be a pars construens, creating a single company, publicly owned of course, and, en passant, renationalizing TIM. Yet it was thanks to competition from Open Fiber, according to the Government, that TIM launched (and is implementing) its fiber coverage plan.||The real issue is not single network versus status quo, but which solution brings fiber into Italians’ homes most quickly, efficiently, and sustainably. TIM does not say that replacing copper with fiber is wrong. It says it wants to do it more gradually. (It has, ever since the aforementioned Ruggiero era.)
Meanwhile, the EU Commission’s goal is no longer ultra-broadband networks but, rightly, a Gigabit Society.
|Grillo, moreover, does not seem overly worried by the contradiction between accusing the public company of stretching timelines and scaling back work, and wanting to put everything, mobile, 5G, fiber, into a single company
|Imagine the operator’s perimeter as a circle.
It enters the homes: on the circumference sits the router; outside the circumference is the access point that the user buys and manages.
With 5G the circle stays outside the house, and the small antenna sits on the circumference, inside the operator’s perimeter.
Until ultra-high-definition video and holographic techniques become widespread, for many people this will be enough: they will be able to cancel the wire entering the home and switch off their own access point (with the advantage of being able to use their devices anywhere, not just near the home access point).
It does not seem senseless to me to think of including portions of wireless network in the perimeter.
|TIM has declared itself willing to spin the network off into a company and merge it with Open Fiber, on condition of keeping control: an existential condition for TIM, since without the network TIM is reduced to a chain of shops.||But what is “the network” in the era of SDN (Software Defined Networks)?
And yet De Benedetti himself wrote in his previous article: “It seemed to be so for electricity too, but the advent of smart cities and the spread of electric cars will require massive use of digital technologies that will make electricity something ever further from a natural monopoly.”
(Electricity, where notoriously there is a single grid operator and a single electrical grid that it has never made sense to duplicate.)
|In this sense, what is meant to be the pars construens of Grillo’s plan is instead its pars destruens, aimed at one of the country’s few large companies.||Before debating the best financial arrangement, we need to be clear about the objective of a public intervention: putting in money merely to gain control (or a blocking minority, as with ILVA or Alitalia), or to set the conditions for the development of an infrastructure that is strategic for the country? And in the latter case, what network do we want, and by when?
The situation is objectively at an impasse. While a decision is pending, Open Fiber cannot stop cabling, and waiting hurts Telecom Italia.
Open Fiber cannot stop because, although largely for exogenous causes, it is behind schedule and must catch up, otherwise it risks having to pay penalties. Waiting hurts TIM because, as Open Fiber’s network expands, its copper network (which the press has always cited as the ultimate collateral for the debt) loses value.
This chart shows (in millions of euros) the trend of revenues and margins (EBITDA) of the Telecom group over the last 11 years, in Italy and in the rest of the world (ROW). It should be noted that the perimeter has changed, i.e. assets were sold to reduce the debt which, as can be seen, has fallen. One can also see that the contribution from the rest of the world has been essentially stable in recent years.
This other chart shows, for Italy only, the trend of revenues, margins, debt, and the number of lines sold to retail customers (not supplied wholesale to competing operators; millions of lines on the right-hand scale).
Granting that I may be wrong, and therefore inviting you to check the numbers on Telecom’s website: judging by this trend, if it were up to me, I too would try to marry a young bride who promises to be more future-proof.
The fact is that this evolution is no bolt from the blue, and IMHO the operation should have been done many years ago. It did not happen partly because the capital changed hands n times, each time for only a few years. Had the shareholders taken a longer-term view, they probably would not have opposed the service/network separation; quite the opposite.
In my view, a re-monopolization merging TIM and Open Fiber could hardly be authorized by the (European) antitrust authority without a separation between services and network, between wholesale operator and retail operator. You can bet that more than one competitor would appeal against an integrated monopoly, and getting all the operators to agree in order to avoid an appeal seems, frankly, unlikely to me at this point.
Even if a wholesale/retail separation were chosen, an operation of this kind is in any case not feasible in a matter of months.
As for state intervention, in the end we come back to the question: to do what? For financial reasons, or to equip the country with a strategic infrastructure? Which one? What infrastructural objective would the new public monopolist have? And with which other shareholders would it align to ensure the country’s interest is translated into corporate objectives? To guard against hostile takeovers, a Golden Share is enough. To decide which great infrastructure plan to pursue, the shareholders must agree.
But what is the great infrastructure plan that justifies all this? That, IMHO, is the question to start from.