So today VP Vestager and Commissioner Breton introduced the drafts of the new EU digital regulation.
A very nice piece of work, I must say.
There are some very fine-grained measures and some more general principles. Among the latter, I think the regulatory ladder, interoperability and device neutrality are particularly worthy of consideration. The Digital Services Coordinator is also a good general principle: a nice idea to improve the effectiveness of the regulation itself.
Let’s think of future amendments…
Auctioneers implement a number of strategies to push auction prices up.
Not only the auction mechanism itself, like the second-price auction, which is well known and understood, but also other, more sophisticated strategies: some form of “semantic expansion” that automatically enlarges the bidding base; granting non-profits some “money equivalent” to spend in their auctions; suggesting the bidding price; or altogether managing the bidding price on the bidder’s behalf (behaving as if they were not the auctioneer and the party selling the product, but a merely technical, “neutral” party).
Suppose you have a shop and want to sell shoes online, and the winning bid price for your ad is 10c.
The auctioneer goes to non-profit X, which gives away shoes to the poor, and gives them a 10K grant as a “credit” that can be spent on the auctioneer’s own adverts (which, by the way, have a zero marginal cost).
They don’t give a contribution in kind (“exposure of non-profit X’s advert”), which would be accounted for as USD 0, given its zero marginal cost.
Instead, non-profit X will bid for the “shoe” keyword on the search engine using this “chinese money” (it’s not real money, as it cannot be cashed out for fiat dollars), thereby pushing up the price “real bidders” (like your shoe shop) have to pay to win the auctions and place their adverts on search results. (Would that contribution to the non-profit also qualify for a tax break? I don’t know.)
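A minimal sketch of that mechanism, assuming a plain second-price auction (all bidder names and bid values here are illustrative, not real data):

```python
def second_price_auction(bids):
    """Highest bidder wins and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Without the subsidized bidder: the shoe shop wins and pays 10c.
bids = {"shoe_shop": 0.12, "rival_shop": 0.10}
print(second_price_auction(bids))  # ('shoe_shop', 0.1)

# The non-profit bids with granted "credit" money; the shop still wins,
# but its price is now set by the subsidized bid.
bids["non_profit_credit"] = 0.11
print(second_price_auction(bids))  # ('shoe_shop', 0.11)
```

The winner doesn’t change, but the auctioneer’s revenue per click goes up, paid with real money by the real bidder.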
Another example of “non-linear behaviour” in auctions involves rebates. Suppose you have a B&B close to the Hoover Dam that you promote via an auction. You would bid for “B&B Hoover Dam” and compete against online travel agencies (OTAs) that bid for the same keywords.
Suppose the auction winning price is 30c per click. If you win, you pay the 30c. But OTAs are among the biggest spenders on digital ads. We don’t know whether, at the end of the year, the auctioneer gives the OTA a rebate, thereby effectively lowering their true cost of winning that click for the B&B at the Hoover Dam.
It might well be that, in the end, thanks to rebates, the OTA really pays 25c for its “30c bid win”; and with those 25c it beats you, even though you offered up to 29c.
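The arithmetic above can be sketched as follows (the rebate rate is purely hypothetical; as said, we don’t know whether such rebates exist):

```python
def effective_cpc(nominal_cpc, rebate_rate):
    """Cost per click after a hypothetical year-end volume rebate."""
    return nominal_cpc * (1 - rebate_rate)

ota_bid, your_max_bid = 0.30, 0.29
rebate_rate = 1 / 6  # illustrative: a ~16.7% rebate turns 30c into 25c

ota_true_cost = round(effective_cpc(ota_bid, rebate_rate), 2)
print(ota_true_cost)                 # 0.25
print(ota_true_cost < your_max_bid)  # True: the OTA outbids you while paying less
```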
We don’t really know whether this kind of thing happens. But we once ran an experiment for accommodations close to Milan, pushing bid offers to unreasonable levels. A popular OTA won the bids at a price higher than the cost of the rooms themselves. It is indeed possible that the OTA decided to lose money on that transaction, but given that bids run automatically based on algorithm configurations, it is also possible that their euros had a different value than our euros.
The key point here is that we don’t know for sure, because the same entity that produces the good being auctioned (at zero marginal cost) also runs the auction and chooses the winner, with no third party inspecting the process and certifying that everything runs smoothly and fairly.
This reminds me of when the FTC and SEC had powers over abuses between banks’ retail and investment arms, with one telling the other how good their product was and what price to pay. Some ex ante regulation works best…
I also think, in some cases, we will need some other provisions re. search…
As a general rule, IMHO, search functions should be separated from other applications and audited, and the user should have the choice of selecting a different search engine (this has already happened with browsers, where you can choose a different search engine).
This is going to become all the more evident with voice assistants.
When you search something on your screen, you get a list of results and, among those few, you choose which one suits you best to answer your need.
When you ask a voice assistant for something, you directly get a result, with no shortlist selection.
Tying the search function to the device, with no possibility for the user to change it, creates a really strong lock-in to the voice search engine. You wouldn’t buy a full set of new voice assistants (several at home, in the car, etc.) just to replace the search engine.
By the time antitrust (eventually) kicks in, the durable effect on the market will already have happened.
These devices would likely be considered a specific market, separate from all the other related and intertwined interests of the supplier (e.g. cloud services and others), and potential remedies would have a very limited impact on the overall interests of the supplying company. The possible sanction would not be a disincentive to misbehave.
So we’ll need some ex ante measures as well.
Here are all the documents published today.
I *love* the regulatory ladder.
Some thoughts with a friend…
Work used to value people for their muscles. Most of the value came from physical manipulation activities.
Then came the industrial revolution, with machinery reducing the predominance of the muscular effort.
And we used to value people for their brains. Most of the value came from their symbol manipulation activities.
Then came the digital revolution, with AI, Big Data and the Internet reducing the predominance of the brain effort.
And we will value people for their hearts. Most of the value will come from human relational activities.
We trained muscles for physical manipulations
We trained brains for symbol manipulations
We will need to train hearts for empathy and humanity
Perhaps this explains many of the adverts showing how nice it is to work there.
Source: Vice
Secret Amazon Reports Expose the Company’s Surveillance of Labor and Environmental Groups
Dozens of leaked documents from Amazon’s Global Security Operations Center reveal the company’s reliance on Pinkerton operatives to spy on warehouse workers and the extensive monitoring of labor unions, environmental activists, and other social movements.
A trove of more than two dozen internal Amazon reports reveal in stark detail the company’s obsessive monitoring of organized labor and social and environmental movements in Europe, particularly during Amazon’s “peak season” between Black Friday and Christmas. The reports, obtained by Motherboard, were written in 2019 by Amazon intelligence analysts who work for the Global Security Operations Center, the company’s security division tasked with protecting Amazon employees, vendors, and assets at Amazon facilities around the world.
The documents show Amazon analysts closely monitor the labor and union-organizing activity of their workers throughout Europe, as well as environmentalist and social justice groups on Facebook and Instagram. They also reveal, and an Amazon spokesperson confirmed, that Amazon has hired Pinkerton operatives—from the notorious spy agency known for its union-busting activities—to gather intelligence on warehouse workers.
Internal emails sent to Amazon’s Global Security Operations Center obtained by Motherboard also reveal that all the division’s team members around the world receive updates on labor organizing activities at warehouses that include the exact date, time, location, the source who reported the action, the number of participants at an event (and in some cases a turnout rate of those expected to participate in a labor action), and a description of what happened, such as a “strike” or “the distribution of leaflets.” Other documents reveal that Amazon intelligence analysts keep close tabs on how many warehouse workers attend union meetings; specific worker dissatisfactions with warehouse conditions, such as excessive workloads; and cases of warehouse-worker theft, from a bottle of tequila to $15,000 worth of smart watches.
The documents offer an unprecedented look inside the internal security and surveillance apparatus of a company that has vigorously attempted to tamp down employee dissent and has previously been caught smearing employees who attempted to organize their colleagues. Amazon’s approach of dealing with its own workforce, labor unions, and social and environmental movements as a threat has grave implications for its workers’ privacy and ability to join labor unions and collectively bargain—and not only in Europe. It should also be concerning to both customers and workers in the United States and Canada, and around the world as the company expands into Turkey, Australia, Mexico, Brazil, and India.
Amazon intelligence analysts appear to gather information on labor organizing and social movements to prevent any disruptions to order fulfillment operations. The new intelligence reports obtained by Motherboard reveal in detail how Amazon uses social media to track environmental activism and social movements in Europe—including Greenpeace and Fridays For Future, environmental activist Greta Thunberg’s global climate strike movement—and perceives such groups as a threat to its operations. In 2019, Amazon monitored the Yellow Vests movement, also known as the gilet jaunes, a grassroots uprising for economic justice that spread across France—and solidarity movements in Vienna and protests against state repression in Iran.
Protesters from environmentalist groups including Extinction Rebellion, ANV-COP 21, Alternatiba, Attac block an Amazon depot in Saint Priest, near Lyon, France, on Black Friday 2019. (Nicolas Liponne/NurPhoto via Getty Images)
The stated purpose of one of these documents is to “highlight potential risks/hazards that may impact Amazon operations, in order to meet customer expectation.”
“Like any other responsible business, we maintain a level of security within our operations to help keep our employees, buildings, and inventory safe,” Lisa Levandowski, a spokesperson for Amazon told Motherboard. “That includes having an internal investigations team who work with law enforcement agencies as appropriate, and everything we do is in line with local laws and conducted with the full knowledge and support of local authorities. Any attempt to sensationalize these activities or suggest we’re doing something unusual or wrong is irresponsible and incorrect.”
Levandowski denied that Amazon hired on-the-ground operatives, and said that any claim that Amazon performs the described activities across its operations worldwide was “N/A.”
This may be one of the first examples of a changing game: Google was sanctioned because of its intermediation activity and as such, it was sanctioned where the customer of the promoted services is located.
Although the amount of the sanction has not been disclosed and is likely to be rather small, it seems quite likely that they will appeal to the second-instance court. In Italy, with respect to the national regulation authority AGCOM, the appeal level is called TAR (Tribunale Amministrativo Regionale) and, if I’m not wrong, they have 60 days to file the appeal.
It seems to me that the provisions of EU Regulation 2019/1150 are a solid legal basis for sanctioning this type of behaviour, so I don’t see many chances that the TAR will reverse the decision.
It is a case that deserves to be followed carefully…
UPDATE: as someone wrote to me in the comments, I was indeed overestimating my readers (or rather, the share of them who skim the text superficially and don’t read to the end to realize this was a sarcastic post)…
The situation is truly dramatic if one person in five in Italy thinks the virus was made by Google and Apple!
To them I point out that it is neither iCovid nor Covid!
As you know, I served in parliament during the previous legislature, elected on Monti’s list (which I left right away).
Now I have certain news, from an extremely reliable source, and I simply cannot stay silent.
Here is what I have learned:
- the virus is no such thing: it is merely a pathogenic agent prepared according to a plan by the globalist powers to reduce the population and concentrate wealth in the hands of a few; but its scarce, not to say nonexistent, lethal “effectiveness” (plain for all to see: just look at the ambulances forced to drive around empty with sirens blaring) forced everyone into a rapid change of plans
- the agent is therefore not natural but was in fact made in a laboratory in China, where they had the systems to stop it; this explains why the emergency in China and its surroundings is already over
- given China’s reaction, Big Pharma immediately rolled out the “vaccines”, which are in fact nothing but neutralizers of the pathogens that had been spread (we all know it takes at least 10 years to make a vaccine!)
- the plan now is to prepare the next “pandemic”, which will be truly lethal, but in a targeted way, personalized to the DNA of the “targets”, thanks to the CRISPR technique
- this is why the World Health Organization, notoriously funded by Soros, Gates and others from the Bilderberg group, has put Monti in charge of a commission to rethink future health systems
- and here is the bombshell: governments are collecting DNA samples from the population from the droplets of saliva we emit into the air, which settle on surfaces; the plan is to make pathogens that will strike only ordinary people and not members of the elites.
Fortunately, there is a simple way to protect ourselves from this criminal plan: until the neutralizers are distributed, let’s always wear a mask when we leave home and fool them!
UPDATE: I’m told that washing your hands often also helps limit the amount of biological material we leave on surfaces, from which the government extracts our DNA. One last piece of advice: stay away from strangers; we cannot be sure they are not government agents in disguise.
(credits: Deutsche Bahn)
TL;DR I’m not arguing against contact tracing apps; I’m critical of Apple and Google’s approach.
In the past months our iPhones and Androids received operating system updates, a joint effort by the two companies to help in the fight against COVID-19. The updates introduced new operating system functions for applications (APIs, Application Programming Interfaces) to enable contact tracing apps developed by national health authorities. Without this update, tracing apps can’t work properly, mainly because of Bluetooth power management on iPhones.
When an iPhone’s screen is off, it detects Bluetooth beacons but does not emit them. It’s mute but not deaf. So, when an Android meets a ‘mute’ iPhone, it cannot detect the contact. In a nutshell, Apple’s and Google’s APIs allow for a different management of the iPhone’s Bluetooth, enabling mutual detection of the contact even when the iPhone’s screen is off.
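A toy model of that asymmetry (the class and method names are illustrative inventions, not the real Exposure Notification API):

```python
class Phone:
    """Toy phone: may or may not emit Bluetooth beacons with the screen off."""

    def __init__(self, name, emits_when_screen_off):
        self.name = name
        self.emits_when_screen_off = emits_when_screen_off
        self.screen_on = False
        self.contacts = []

    def emitting(self):
        return self.screen_on or self.emits_when_screen_off

    def hears(self, other):
        # A contact is logged only if the other phone is actually emitting.
        if other.emitting():
            self.contacts.append(other.name)

android = Phone("android", emits_when_screen_off=True)
iphone = Phone("iphone", emits_when_screen_off=False)  # "mute but not deaf"

android.hears(iphone)   # iPhone screen off: no beacon, contact missed
iphone.hears(android)   # Android emits: contact logged
print(android.contacts) # []
print(iphone.contacts)  # ['android']
```

The joint API effectively flips `emits_when_screen_off` to true for authorized tracing apps, so the contact is detected on both sides.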
The companies have decided that this functionality is reserved exclusively for States’ authorities.
This is the critical part: Apple and Google have agreed to provide States with a feature that bypasses the normal operation of the operating system and enables State apps to do things that all other developers are barred from doing.
Let’s forget the Covid emergency for a moment and reread the sentence above: functions on our devices that only State authorities can access.
Since this path has been taken, which reserved functions will States ask Google and Apple for tomorrow? Once this path has been taken, is it irrational to think that States will ask for similar backdoors for surveillance purposes, obviously keeping them secret?
Opening the source code of States’ apps won’t help much in inspecting what the apps really do. A “reproducible build” is a technical process to check whether some source code corresponds to the app we download from the stores. For technical reasons, it is not possible to have an independent, byte-for-byte verification of all apps using State-reserved functions: the possibility of democratic oversight is undermined.
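At its core, the check that a reproducible build makes possible is a byte-for-byte comparison of two artifacts, sketched here with hypothetical placeholder data:

```python
import hashlib

def same_build(store_bytes: bytes, rebuilt_bytes: bytes) -> bool:
    """Reproducible-build check: the two artifacts must match byte for byte."""
    return (hashlib.sha256(store_bytes).hexdigest()
            == hashlib.sha256(rebuilt_bytes).hexdigest())

# Illustrative only: in practice these would be the app downloaded from the
# store and a deterministic local rebuild of the published source.
print(same_build(b"identical artifact", b"identical artifact"))  # True
print(same_build(b"identical artifact", b"tampered artifact"))   # False
```

It is precisely this simple comparison that becomes impossible to run independently when apps use functions reserved to States.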
We welcome the effort of the duopolistic companies to make a contribution to public health. But this paradigm shift, making some functions available only to States’ authorities, is unacceptable. Reserving APIs for States is a paradigm echoing the behavior of totalitarian regimes, even worse than what George Orwell imagined.
We should welcome national health authorities who employ all available techniques in order to protect public health. The problem is not them using these APIs: they exist today, even if health authorities don’t use them.
The problem lies upstream; it lies in the concept itself that our devices, the homes of our digital dimension, provide services exclusively accessible to States’ authorities.
A striking difference with what William Pitt, 1st Earl of Chatham, who served as Prime Minister of Great Britain, said in 1763:
“The poorest man may in his cottage bid defiance to all the forces of the crown. It may be frail – its roof may shake – the wind may blow through it – the storm may enter – the rain may enter – but the King of England cannot enter.”
Source (Consciousness -> Meaning -> Symbols) -> Communication -> (Symbols -> Meaning -> Consciousness) Receiver
When a person communicates with another, the process may be described as above. The source is conscious and has the intention of communicating, and a meaning she wants to communicate; she encodes the meaning in symbols, which are then communicated. The receiver interprets the symbols she receives, extracting a meaning that she internalizes with her consciousness.
In this act of communication both source and receiving parties need to have some shared model to base the encoding and interpretation of the meaning; a shared context and culture (a common knowledge).
When a machine with an AI system generates a message (as in a chatbot), it generates it by algorithmically assembling symbols based on a statistical model. There is no consciousness, no meaning forming the intention of the communication.
The receiver would be wrong to assume that the machine intended to mean the information carried in the message. It’s a purely mechanical product.
It’s the receiving party that can attribute a meaning to the symbols, not the source.
Meaning is only in the eyes of the beholder.
Think of an abstract painter. She is conscious and wants to communicate something to the observers; she encodes the message in the painting, but then eventually an observer looks at the painting and is unable to attribute a meaning; the observer just sees symbols and cannot extract a meaning.
This is symmetric to what happens with AI: the machine just mechanically assembles symbols and then the receiver attributes a meaning.
The machine does not know what the message means, it does not know it is producing a message; it does not even know that its output is a message and that someone will receive it.
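A toy illustration of “algorithmically assembling symbols based on a statistical model”: a bigram Markov chain that strings words together by observed frequency alone, with no intention behind the output (the corpus here is made up for the example):

```python
import random
from collections import defaultdict

corpus = ("the machine assembles symbols the machine has no meaning "
          "the reader sees meaning").split()

# The "statistical model": which word has been observed following which.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    """Mechanically chain symbols; nothing here 'knows' it is a message."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the", 6))  # fluent-looking word salad; any meaning is the reader's
```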
With AI it’s like looking at rocks, stalagmites or clouds and seeing an animal.
The clouds don’t know it’s an animal; they don’t even know it’s a shape. They don’t know.
It’s just us, seeing the clouds who see an animal in them.
The important report is here.
It makes for very important and interesting reading.
A mini-synthesis of the findings in the 400 pages long report can be found in this twitter thread.
As many readers know, I have devoted a significant part of my public life to promoting competition in the digital space (“dimensione immateriale”), proposed comprehensive bills while I was in parliament, and wrote a book, “Capitalismo Immateriale” (in English it would be “Virtual Capitalism”; a 5-star rating in Italy and a good number of reprints), explaining in great detail the business models and practices that this report has found applied by these big four.
Here are a few highlights:
In June 2019, the Committee on the Judiciary initiated a bipartisan investigation into the state of competition online, spearheaded by the Subcommittee on Antitrust, Commercial and Administrative Law. As part of a top-to-bottom review of the market, the Subcommittee examined the dominance of Amazon, Apple, Facebook, and Google, and their business practices to determine how their power affects our economy and our democracy. Additionally, the Subcommittee performed a review of existing antitrust laws, competition policies, and current enforcement levels to assess whether they are adequate to address market power and anticompetitive conduct in digital markets.
To put it simply, companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons. Although these firms have delivered clear benefits to society, the dominance of Amazon, Apple, Facebook, and Google has come at a price. These firms typically run the marketplace while also competing in it—a position that enables them to write one set of rules for others, while they play by another, or to engage in a form of their own private quasi-regulation that is unaccountable to anyone but themselves. The effects of this significant and durable market power are costly. The Subcommittee’s series of hearings produced significant evidence that these firms wield their dominance in ways that erode entrepreneurship, degrade Americans’ privacy online, and undermine the vibrancy of the free and diverse press. The result is less innovation, fewer choices for consumers, and a weakened democracy.
Nearly a century ago, Supreme Court Justice Louis Brandeis wrote: “We must make our choice. We may have democracy, or we may have wealth concentrated in the hands of a few, but we cannot have both.” Those words speak to us with great urgency today. Although we do not expect that all of our Members will agree on every finding and recommendation identified in this Report, we firmly believe that the totality of the evidence produced during this investigation demonstrates the pressing need for legislative action and reform.
These firms have too much power, and that power must be reined in and subject to appropriate oversight and enforcement. Our economy and democracy are at stake. As a charter of economic liberty, the antitrust laws are the backbone of open and fair markets.
When confronted by powerful monopolies over the past century—be it the railroad tycoons and oil barons or Ma Bell and Microsoft—Congress has acted to ensure that no dominant firm captures and holds undue control over our economy or our democracy. We face similar challenges today.
Congress—not the courts, agencies, or private companies—enacted the antitrust laws, and Congress must lead the path forward to modernize them for the economy of today, as well as tomorrow. Our laws must be updated to ensure that our economy remains vibrant and open in the digital age.
Congress must also ensure that the antitrust agencies aggressively and fairly enforce the law. Over the course of the investigation, the Subcommittee uncovered evidence that the antitrust agencies failed, at key occasions, to stop monopolists from rolling up their competitors and failed to protect the American people from abuses of monopoly power. Forceful agency action is critical.
Lastly, Congress must revive its tradition of robust oversight over the antitrust laws and increased market concentration in our economy. In prior Congresses, the Subcommittee routinely examined these concerns in accordance with its constitutional mandate to conduct oversight and perform its legislative duties. As a 1950 report from the then-named Subcommittee on the Study of Monopoly Power described its mandate: “It is the province of this subcommittee to investigate factors which tend to eliminate competition, strengthen monopolies, injure small business, or promote undue concentration of economic power; to ascertain the facts, and to make recommendations based on those findings.”
Now let me add a personal touch.
I started analyzing and understanding patterns echoing abusive behaviors when I had to close a startup I funded and founded more than a decade ago.
The dominant phone player at the time was European, the popular video format for small-screen handsets was .3GP, UMTS was rolling out, smartphones were just born (but growing *fast*), the Wii console was a hit and there were plenty of video sites.
My startup, based in the UK, developed a system that allowed a user to bookmark a video – any video, on any site – for later watching on any device the user owned (hiding all technical complexities such as format transcoding, synchronizing, etc.), allowing her to manage her playlists, share with a limited number of friends with a legal friendly “lending” mode, etc.
The smartphone app allowed for synchronization of the contents for offline viewing (there was not enough bandwidth for streaming and it was costly; there was no coverage in subways, trains or airplanes): when you arrived home/office, the app would recognize the wifi and start synchronizing all the videos to your device.
We even had a client for the Wii, so you could watch your playlists on TV (and on PlayStations as well).
We published the app on the store and downloads were ramping up very satisfactorily, and just by word of mouth.
There was a clear need for a service that allowed users to aggregate videos from any of the many possible sources(*) and watch them on any device, anywhere, online or offline.
(*) there was life beyond youtube, at the time.
Then our app was taken down from the store. And that happened time and again, with justifications that seemed specious to me.
The app was clearly legit.
One day my CTO received a call which basically said that it was pointless for us to insist with publishing the app because they were supporting Youtube and didn’t want a competing app.
We discussed with the lawyers, and for a while we unsuccessfully tried to get a written statement of what we had been told, to possibly start a legal action. Later, absent any proof of the facts above, I had no choice but to shut down the company.
I told my story to some friends and heard other similar stories; in one case, I was told a company was asked several million euros to let its app into the store.
I have no proof, so I have been very careful to avoid references to a specific company.
This is a video of my system at the time (11 years ago).
Mea culpa… Computer scientists like me often believe, by virtue of their problem-solving methodology, that they know the best process to solve a problem, any problem.
Then, when things don’t turn out as hoped, there is always someone to blame or an unforeseen and unavoidable problem.
With age, you come to understand that the problems generally had tacit, unformalized aspects which, had they been given due consideration from the start, would have tempered that overconfidence; that the objective function one was trying to address was not the appropriate one; and so on.
Opinion: AI For Good Is Often Bad
After speaking at an MIT conference on emerging AI technology earlier this year, I entered a lobby full of industry vendors and noticed an open doorway leading to tall grass and shrubbery recreating a slice of the African plains. I had stumbled onto TrailGuard AI, Intel’s flagship AI for Good project, which the chip company describes as an artificial intelligence solution to the crime of wildlife poaching. Walking through the faux flora and sounds of the savannah, I emerged in front of a digital screen displaying a choppy video of my trek. The AI system had detected my movements and captured digital photos of my face, framed by a rectangle with the label “poacher” highlighted in red.
Mark Latonero (@latonero) is a fellow at the Harvard Kennedy School’s Carr Center for Human Rights Policy and a research lead at Data & Society.
I was handed a printout with my blurry image next to a picture of an elephant, along with text explaining that the TrailGuard AI camera alerts rangers to capture poachers before one of the 35,000 elephants each year are killed. Despite these good intentions, I couldn’t help but wonder: What if this happened to me in the wild? Would local authorities come to arrest me now that I had been labeled a criminal? How would I prove my innocence against the AI? Was the false positive a result of a tool like facial recognition, notoriously bad with darker skin tones, or was it something else about me? Is everyone a poacher in the eyes of Intel’s computer vision?
Intel isn’t alone. Within the last few years, a number of tech companies, from Google to Huawei, have launched their own programs under the AI for Good banner. They deploy technologies like machine-learning algorithms to address critical issues like crime, poverty, hunger, and disease. In May, French president Emmanuel Macron invited about 60 leaders of AI-driven companies, like Facebook’s Mark Zuckerberg, to a Tech for Good Summit in Paris. The same month, the United Nations in Geneva hosted its third annual AI for Global Good Summit sponsored by XPrize. (Disclosure: I have spoken at it twice.) A recent McKinsey report on AI for Social Good provides an analysis of 160 current cases claiming to use AI to address the world’s most pressing and intractable problems.
While AI for good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need. In fact, the fanfare around these projects smacks of tech solutionism, which can mask root causes and the risks of experimenting with AI on vulnerable people without appropriate safeguards.
Tech companies that set out to develop a tool for the common good, not only their self-interest, soon face a dilemma: They lack the expertise in the intractable social and humanitarian issues facing much of the world. That’s why companies like Intel have partnered with National Geographic and the Leonardo DiCaprio Foundation on wildlife trafficking. And why Facebook partnered with the Red Cross to find missing people after disasters. IBM’s social-good program alone boasts 19 partnerships with NGOs and government agencies. Partnerships are smart. The last thing society needs is for engineers in enclaves like Silicon Valley to deploy AI tools for global problems they know little about.
The deeper issue is that no massive social problem can be reduced to the solution offered by the smartest corporate technologists partnering with the most venerable international organizations. When I reached out to the head of Intel’s AI for Good program for comment, I was told that the “poacher” label I received at the TrailGuard installation was in error—the public demonstration didn’t match the reality. The real AI system, Intel assured me, only detects humans or vehicles in the vicinity of endangered elephants and leaves it to the park rangers to identify them as poachers. Despite this nuance, the AI camera still won’t detect the likely causes of poaching: corruption, disregarding the rule of law, poverty, smuggling, and the recalcitrant demand for ivory. Those who still cling to technological solutionism are operating under the false assumption that because a company’s AI application might work in one narrow area, it will work on a broad political and social problem that has vexed society for ages.
Sometimes, a company’s pro-bono projects collide with their commercial interests. Earlier this year Palantir and the World Food Programme announced a $45M partnership to use data analytics to improve food delivery in humanitarian crises. A backlash quickly ensued, led by civil society organizations concerned over issues like data privacy and surveillance, which stem from Palantir’s contracts with the military. Despite Palantir’s project helping the humanitarian organization Mercy Corps aid refugees in Jordan, protesters and even some Palantir employees have demanded the company stop helping the Immigration and Customs Enforcement detain migrants and separate families at the US border.
Even when a company’s intentions seem coherent, the reality is that for many AI applications, the current state of the art is pretty bad when applied to global populations. Researchers have found that facial recognition software, in particular, is often biased against people of color, especially women of color. This has led to calls for a global moratorium on facial recognition and prompted cities like San Francisco to effectively ban it. AI systems built on limited training data create inaccurate predictive models that lead to unfair outcomes. AI for good projects often amount to pilot beta testing with unproven technologies. It’s unacceptable to experiment in the real world on vulnerable people, especially without their meaningful consent. And the AI field has yet to figure out who is culpable when these systems fail and people are hurt as a result.
This is not to say tech companies should not work to serve the common good. With AI poised to impact much of our lives, they have more of a responsibility to do so. To start, companies and their partners need to move from good intentions to accountable actions that mitigate risk. They should be transparent about both the benefits and the harms these AI tools may have in the long run. Their publicity around the tools should reflect the reality, not the hype. To Intel’s credit, the company promised to fix that demo to avoid future confusion. Companies should involve the local people closest to the problem in the design process and conduct independent human rights assessments to determine whether a project should move forward. Overall, companies should approach any complex global problem with the humility of knowing that an AI tool won’t solve it.
5G is fundamental for the economic development of the country. Yes, but why?
Because of its lower latency?
Because it will allow us to talk to the fridge and to know if the yogurt has expired?
To enable the circulation of self-driving cars or to allow remote surgical operations?
Let’s try to shed some light and understand – beyond the hype – why the real “killer application” of the new mobile generation will, for users, probably not be any of those advertised so far.
5G, some technical elements
Recall that 5G’s distinguishing promise is lower energy consumption for the same bits transmitted (which is ecologically commendable), coupled with higher “bandwidth” (throughput) and lower latency; all with greater capillarity and lower emissions.
To understand throughput with a simple analogy, think of a fireman’s hose and a straw: throughput is the amount of water that comes out of the pipe per unit of time. It depends on many factors, such as the cross-section of the hose, the water pressure and, last but not least, how much water per second the aqueduct is actually able to deliver to the hose.
Latency is a measure of how long it takes for a water molecule entering the pipe to exit at the other end.
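The hose analogy can be turned into a tiny back-of-the-envelope model (my own sketch; the link speeds, latencies and file size below are illustrative assumptions, not measurements): the time to deliver a file is roughly the latency plus the size divided by the throughput.

```python
# Toy model: delivery time ≈ one-way latency + size / throughput.
# All numbers are illustrative assumptions for the sake of the analogy.

def delivery_time_s(size_mbit: float, throughput_mbps: float, latency_ms: float) -> float:
    """Approximate seconds to deliver `size_mbit` megabits over a link."""
    return latency_ms / 1000.0 + size_mbit / throughput_mbps

# A 100 MB file (800 Mbit) over a hypothetical 50 Mbps / 50 ms link
# versus a hypothetical 500 Mbps / 20 ms link:
t_slow = delivery_time_s(800, 50, 50)    # ~16.05 s: throughput dominates
t_fast = delivery_time_s(800, 500, 20)   # ~1.62 s

# For a tiny request (10 kbit), latency dominates instead:
t_small = delivery_time_s(0.01, 50, 50)  # ~0.0502 s, almost all latency
```

The point of the sketch: for bulk transfers throughput matters, while for small, chatty exchanges latency is what you feel.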
4G – LTE-A: 50 Mbps and beyond

Table 1 – Throughput (“bandwidth”) of the various standards (indicative values)
(*) without congestion

Table 2 – Latency (milliseconds) of the various standards (indicative values)
It is often said that 5G will have a latency of 1 ms. This is a theoretical limit, obtained under laboratory conditions, that will not be experienced in practice.
We must also ask ourselves “latency to get where?”. For a bit inserted into the network in Rome to come out in São Paulo, Brazil, even traveling at the speed of light and with few switching devices in between, over 100 ms are required (São Paulo is far away!). So in practice the latency will go from approximately 150 ms with 4G to about 120 ms with 5G. It may seem a small reduction, but it’s actually a big gain.
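A back-of-the-envelope check of that figure (the distance and fiber speed below are my rounded assumptions, not from the original):

```python
# Rome–São Paulo propagation delay, order-of-magnitude estimate.
# Assumptions: great-circle distance ~9,200 km; light in optical fiber
# travels at roughly 200,000 km/s (about two thirds of c in a vacuum).

distance_km = 9_200
fiber_speed_km_s = 200_000

one_way_ms = distance_km / fiber_speed_km_s * 1000   # 46 ms one way
round_trip_ms = 2 * one_way_ms                       # 92 ms round trip

# Real cable routes are longer than the great circle and every switching
# device adds delay, which is why "over 100 ms" is a reasonable floor
# regardless of how fast the radio access segment becomes.
```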
If we consider that most of the content and services we usually access are not located on the opposite side of the world but in a datacenter near us, we can understand that when accessing blog.quintarelli.it the latency will be nearly halved. If we then put a myriad of servers with replicated content in each city, latency could fall even below the values indicated in the table.
Low latency, for which business reason?
What type of business would need such a low level of latency to justify such a large and widespread server infrastructure? Certainly not streaming video – it is not dramatic to wait 30 milliseconds more for a movie to start – but, for example, interactive games that require fast reaction times (the reaction times of athletes vary between the 100 ms of a world-record 100-meter sprinter and the 250 ms of a “normal” professional). So a professional gamer on a 5G access would have an advantage over other professionals using 4G, a difference that could earn him the first step of the podium. (Of course, not all of us are professional gamers.)
All this, provided that the rest of the network is carefully dimensioned and has no other bottlenecks (congestion) other than the access segment. And this is rarely the case.
While I’m writing, a simple ping command (which provides an estimate of round-trip latency) from my PC to a server located at the heart of the Italian network reports a minimum value of 46 ms and a maximum of 68 ms, a variability of about 50%. (Long live smart working! With the many people normally in the office contending for the available network capacity, the variability is often larger.)
There is then a further element to consider, namely where to put the application. Think about a virtual or augmented reality headset: when the wearer turns his head, in order not to feel motion sickness, it is important that the sliding of the images seen by the eye accompanies the sliding sensation produced by the vestibular apparatus in our inner ear. Low latency can be very useful to mitigate this annoying sensation.
The closer the application is to the viewer, the better.
Bringing applications as close as possible to users is a paradigm called edge computing. Some hypothesize deploying myriads of small datacenters close to the antennas. But what application requiring such low latency would stay within the operator’s business perimeter and justify these investments? (That is, justify a sufficient number of users paying something more than the traditional price in order to benefit from this latency reduction?)
The question is not trivial, especially if you consider that the most extreme level of edge computing is to put the computer in the user’s home. If the application were delivered from a home console connected with a cable (or an ultra-wideband wireless link) to the headset, it would be (with a few exceptions) even better than if the application were delivered from a server near the network border (near the radio access portion of the network). It would be better still if the application ran directly in the headset.
So what is 5G really for?
Some TV commercials promote the image of 5G-enabled drones delivering cups of “frappuccino” to our door, children entertaining themselves with dinosaur holograms popping up in the garden, telemedicine so advanced that a surgeon can perform surgery while attending a wedding, cars driving themselves around the city, and spectators watching basketball games while circling their point of view around the players on the field.
But are these the applications of 5G? Searching online you will find interesting ideas, less extreme than the TV advertising. I comment on some of them:
- “5G will make life easier for robots, that is, the world called the Internet of Things. Things, from traffic lights to the fridge at home, always connected to the network. The goal of 5G is basically to propose in mobility all those essential features present in fixed networks, but with a latency even lower than cable connections”.
Traffic lights and refrigerators in mobility. But are we sure they need all this bandwidth, and that 20 ms less is decisive for switching from green to yellow or for telling us that the yogurt is running out?
- “…will also allow musicians, singers and artists who are in different places to interact in a single show”.
A reduced latency, accompanied by low-latency interfacing circuits for musical instruments, will probably allow people to play music together remotely. The task is not trivial and also depends to some extent on the type of music played, but this is a use case that seems likely, even if it is not the killer application of 5G either.
- “This connection will be important in the medical field for the new connected ambulances that will be able to give a lot of information to the hospital even before the patient arrives. Planned uses are in telemedicine and rehabilitative robotics with the possibility to use IoT devices and remote control”.
Of course, we are talking about all those ambulances that will carry on board diagnostic imaging systems (CT, MRI) that must transmit many gigabytes in a few seconds, yesssss… because transmitting oxygen-saturation readings and an ECG certainly requires just a few bytes, and during the lockdown we all got used to making video conferences… Likewise for physiotherapeutic rehabilitation: clearly some believe that the electromechanical machines (today found only in specialized centers), if used at home, need much more than the 50 Mbps that current LTE can give, and that you cannot check whether the patient is bending a knee without a latency as low as a professional sprinter’s reaction time.
- “In the field of security and video surveillance, high-resolution cameras with 5G support will be used, installable in stations and crowded places”.
4K video requires about 16 Mbps; 8K video (7680 × 4320) will realistically require about 40–50 Mbps. Nothing a decent landline access can’t support. Of course, you can’t watch it if you’re not at home.
- “It will also be useful in tourism and journalism; the latter, thanks to 5G, will be able to provide more timely images and news from every part of the city”.
Which, as we know, is the core problem of journalism today…
- “5G also brings benefits to self-driving cars, as they will be able to communicate in real time with the road infrastructure and obtain important information for road safety and security”.
One wonders how autonomous cars manage today. Do we really need that latency reduction to signal positions and obstacles? And even assuming that a very low latency really is absolutely necessary, wouldn’t it make more sense to use direct connections between cars, without passing through a network that inevitably adds some latency?
As you know, I am very skeptical about the concrete possibility of cars autonomously whizzing through cities, because of safety problems related to an uneven and, above all, unmonitored environment. A different story are highways, which are homogeneous and very well supervised environments. We will probably see autonomous driving on the highway, probably with reserved lanes; and rather than intervehicular communication, we may need a lot of mobile bandwidth to work or entertain ourselves during our travels.
- “Even smart homes will benefit from 5G, all objects in the home will be able to communicate with each other, receive information from the outside and be controlled remotely from a single device”.
What can’t be done today with fixed network access and Wi-Fi? (Hoping that the device that remotely controls our home belongs to us…) Are we sure that the complexity of Narrowband IoT (NB-IoT) and its 5G version will prevail over Wi-Fi and Bluetooth? It seems to me that the probability of this happening in the short to medium term is quite low, unless some black swan gives the necessary impulse.
- “An ultra-performing network will be fundamental for the transition to the Internet of Things, i.e. to ensure the development of applications and services for the Smart City based on sensors (for example for traffic control, waste collection, urban lighting, logistics)”.
In 20 milliseconds, many cars pass through street intersections and a lot of waste is thrown away. And if a streetlamp burns out, we really must be the first to know!
- “The new, very fast 5G mobile networks will also help improve safety on the roads. In addition to allowing more data to be transferred in the same unit of time, they have a much lower latency and a reduced error rate: if one data packet in a thousand is “lost” with 4G, with 5G you get up to one in a million. It is these last two aspects that will make the electronic security systems of cars more effective, which will “dialogue” with each other in real time and with the certainty that the information arrives”.
I discussed latency above; as for packet loss, as people who deal with networks know, network protocols (TCP) have control mechanisms that guarantee applications that all transmitted packets arrive. This aspect, therefore, is already assured by current technologies.
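This guarantee is visible even in a trivial loopback exchange (a minimal sketch of my own; the retransmission machinery works below the socket API, invisible to the application, which simply reads until it has all the bytes):

```python
import socket
import threading

# Minimal TCP loopback echo: the application never sees lost or reordered
# packets, because TCP's acknowledgment/retransmission machinery (hidden
# below this API) guarantees in-order, complete delivery.

def run_echo_once(payload: bytes) -> bytes:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()
        data = b""
        while len(data) < len(payload):   # read until the full payload arrives
            data += conn.recv(4096)
        conn.sendall(data)                # echo it back
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.sendall(payload)
    received = b""
    while len(received) < len(payload):
        received += client.recv(4096)
    client.close()
    t.join()
    server.close()
    return received

# Every byte comes back intact and in order, with no application-level
# error handling for loss:
assert run_echo_once(b"x" * 100_000) == b"x" * 100_000
```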
- “The productive world will be revolutionized through the full digitalization of production facilities – the so-called Industry 4.0 – and the development of precision agriculture”.
Indeed, unlike the residential market, the business market could have an interest in 5G for IIoT (Industrial IoT). 5G will allow a higher connection density, up to one sensor per square meter, that is one million sensors per square kilometer, compared to the 50 thousand allowed by the best NB-IoT technologies. We must also bear in mind that in the industrial field wired network technologies are very expensive (PROFINET cables and switches cost an order of magnitude more than the non-industrial equivalents) and introduce rigidity into installations that could be more flexibly realized and reconfigured using wireless connections. (examples 1, 2, 3, 4)
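The density gap is worth spelling out (a trivial calculation using only the figures quoted above):

```python
# Connection-density comparison from the text:
# 5G targets ~1 device per square meter; the best NB-IoT deployments
# allow ~50,000 devices per square kilometer.

devices_per_m2_5g = 1
m2_per_km2 = 1_000_000

density_5g = devices_per_m2_5g * m2_per_km2   # 1,000,000 devices per km²
density_nb_iot = 50_000                       # devices per km²

ratio = density_5g / density_nb_iot           # 20x more devices per km²
```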
Be careful! I’m not saying that 5G doesn’t help, but that the “killer applications” proposed today (which often lean on “life-saving” arguments, because investments for those are hard to argue against), with the notable exception of the business market and IIoT, are generally trivial and almost certainly destined not to materialize.
Figure 1 – Piazza Maggi, Milan
This does not mean that a new network infrastructure should not be built; on the contrary! The first motivation for a new infrastructure is that it must precede demand and enable it: the new infrastructure will serve a demand that is not there today. Einaudi, a very famous Italian economist and politician, said that markets express demands, not needs, meaning they express immediate requirements rather than long-term needs.
Also from this point of view the idea of pooling investments to co-invest in a 5G network infrastructure seems reasonable.
The networks they are a changin’
“Mobile” networks (which are not mobile: it is people who are mobile, not the networks) work by emitting signals that fade as you move away from the antenna, so phones and antennas have to transmit with increasing power to communicate, like two people who must speak ever louder as they move away from each other. The other solution is to place many more antennas, much closer to the user, so that they can communicate with lower power; the lower the power, the larger the number of antennas needed. This is what has happened with mobile telephony: at every stage of evolution, from GSM to UMTS (3G) to HSDPA (3.5G) to LTE (4G), emissions decrease and antenna density increases.
Consequently, the wireless access segment (the part from the user to the antenna) tends to become shorter: from the many kilometers of GSM (with low bandwidth) to a few tens of meters with Wi-Fi and 5G (which provide a lot of bandwidth).
With 5G, in Italy we will have hundreds of thousands of small antennas with very low emissions (also because we have the lowest emission constraints in the world). In the future we will find them at the base of many buildings, and they will ubiquitously provide us performance similar to Wi-Fi, wherever we go.
Each of these antennas will need to be fed bandwidth through a (usually fixed) network hookup. For this reason the importance of fixed networks is obvious, whether they run over the “old cable TV” as in some European countries or over telephone networks with optical fiber. But it is good to overcome this conceptual differentiation too. The network is always and only one: it serves to carry data of any type. It does not matter whether it was born for the telephone (copper pair) or for TV (cable networks), or whether it is made of optical fiber.
Such a high density of antennas means that each antenna will be used by fewer people. If an antenna covers a radius of two kilometers, all users in a small town will connect to it. If an antenna covers a twenty meter radius, only the few people in that building will connect to it.
The transmission capacity available from an antenna is shared among the people who connect to it. In the case of the small town it will be shared among many people; in the second case among very few, so each individual in the second scenario will have much more bandwidth available than their peers in the first. Increasing the capillarity of the antennas helps reduce emissions and increase the performance available to users.
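The town-versus-building example can be sketched as a naive equal-share model (cell capacity and user counts are my illustrative assumptions, not real figures):

```python
# Toy model of why denser antennas mean more per-user bandwidth:
# a cell's capacity is (naively) split equally among its active users.
# All numbers are illustrative assumptions.

def per_user_mbps(cell_capacity_mbps: float, active_users: int) -> float:
    """Naive equal share of a cell's capacity among its active users."""
    return cell_capacity_mbps / active_users

# One big cell covering a small town (2 km radius) vs a micro-cell
# covering a single building (20 m radius), both with 1 Gbps capacity:
town_share = per_user_mbps(1000, 500)   # 2 Mbps each
micro_share = per_user_mbps(1000, 5)    # 200 Mbps each
```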
Lower emissions thanks to 5G
It seems a counter-intuitive thing: lower electromagnetic emissions with more antennas; higher capacity with lower emissions. An example helps to clarify: imagine a room with forty people and two of them talking with a megaphone. The noise pollution will be very high and the overall capacity of the room will be just one conversation. If everyone whispers, the noise pollution will be minimal and the overall capacity will be twenty simultaneous conversations.
Ultimately, the network is not what we are used to thinking it is: a single object managed by an operator. Instead, it is a mix of different types of routes, with different technologies, much like a road network.
From the user’s point of view, in the future there will not be a great distinction between fixed and wireless networks. The fixed network will have very high capillarity, and at its edge there will be an antenna. If located inside a house, it will be a Wi-Fi access point, managed independently by the most sophisticated users; if located outside the house, it will instead be a 5G antenna managed by an operator.
A mental image of networks
Let’s picture circles to imagine service boundaries of an operator.
Today there is a circle that reaches the users’ homes, the fixed network, to which a Wi-Fi access point is connected; the access point can sit just inside or just outside the circumference, depending on whether it is provided and managed by the operator (inside the circle) or installed directly by the user (outside the circle). Outside this circle there are the user’s devices, from PCs to smart refrigerators (!?) to TVs connected to disks that store documents, photos and movies (with users managing their complexity).
Then there is another circle, the “mobile network” one, whose radius is smaller. It is the network that feeds the cellular radio stations located just inside the edge of the circle, within the operator’s perimeter, to which users connect with their mobile devices.
Over time this second circle has expanded, and with 5G it will come so close to the border of the first circle that many users will find it more convenient to connect their devices directly and entrust the operator with the custody of their documents, photos and movies (without having to manage the complexity), reducing their management burden, often increasing their level of security, and being able to have everywhere what would otherwise be available only at home or the office.
These two circles, the fixed network and the mobile network, have grown closer over time and will continue to do so with 5G; there is a large overlap in the surface area of the two circles. The more widely the “fixed” network is developed, the more it will also serve as infrastructure for 5G.
The telephonist’s drama
Let’s go back to the use cases: many of those told, especially those presented as useful to save lives, suffer from what we could call “the telephonist’s drama”. I call it that because I often perceive it when talking to many friends – very competent people who work for telco operators – which, having been born as telephone companies, keep that imprinting in their DNA and still seem to me a bit conditioned by that way of thinking.
An example of a typical “telephone” way of thinking is that of value added services.
Let’s imagine a system to improve driving safety, based on a myriad of temperature and humidity sensors scattered along a mountain road, so that a car approaching a blind curve can be informed that there is a high risk of icing.
This is a typical use case in 5G narratives.
Let’s ask ourselves: did Vodafone or Telefonica install those sensors? If Vodafone installed them, will they be accessible to cars connected to Telefonica’s network? And to those connected to Iliad’s?
If we were in a monopoly situation, the problem would not arise: the same single operator puts the sensors and connects the cars.
But how can it be solved in a competitive situation? All mobile operators sign service interoperability agreements (with payment settlement) whereby one operator gives the others access to the information from the sensors it has placed on street X, a second operator gives the first access to the information for street Y, and then they periodically settle accounts to see who owes whom how much, after compensation.
You can bet that, while they are busy negotiating multiple bilateral service roaming agreements, someone will come along and install sensors that communicate via the Internet, taking these possible services away from the operators and delivering them over the top – remunerated with advertising, credit card or freemium models.
Does that ring a bell?
In order to avoid a similar scenario, the operators should probably make a consortium and create a joint over the top service provider, presenting themselves on the market as a single integrated entity.
I doubt it will ever happen. Genetic imprinting dies hard. Consider that in the 2020 5G White Paper issued by the NGMN Alliance (Alliance for Next Generation Mobile Networks, an association of operators, vendors, manufacturers and research institutes in the mobile phone industry) the word “Internet” appears only once (just once!), in the glossary, in the definition of “IIoT: Industrial Internet of Things”.
What fate for 5G networks?
I have to make a major premise: the operators’ business cases to sustain the realization of widely spread 5G networks, with myriads of access points, are anything but obvious. It is likely that we will see a gradual roll-out, starting where the market can reward technology, i.e. in the business market, especially in industry.
As far as the residential market is concerned, growing use of 5G could lead to a reduction in the myriad of Wi-Fi access points now bought everywhere, by anyone, installed by anyone, and uninspected for the quantity and quality of their emissions. For many users 5G could greatly reduce the need for a fixed hookup to their homes, replacing the last meters of Wi-Fi access with the last tens of meters of 5G.
As of today, the ways we use the network at home and on the move are different: at home we have our files, backups and devices that (in general) we cannot access while on the move. A scenario like the one described could allow the members of a family to benefit from all their data in mobility, anywhere, as if they were at home. Perhaps this could be a killer app (one that might interest operators, though it would clash with OTTs and device manufacturers who already provide these services, although with a looser billing relationship with the customer).
However, there is a fundamental consideration to be made: history teaches us that at home as at work, on the Internet, people want to do more and more things in less and less time. So traffic increases substantially and inexorably.
It is true that in nature no exponential curve grows forever (sorry, Mr. Kurzweil!): all exponential curves sooner or later become logistic curves and flatten. There is certainly an asymptote to how much bandwidth a user can consume. My impression is that we are still very far from reaching it. This also seems confirmed by recent actual usage data: the bandwidth used in Italy (and in the world at large) is still growing by ca. 50% per year with a comparable number of users.
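To see how fast 50% per year compounds (the growth rate is from the paragraph above; the five-year horizon is my illustration):

```python
# Compound growth of traffic at ~50% per year with a stable user base.
# The 50%/year figure is from the text; the horizon is illustrative.

growth = 1.5   # +50% per year
years = 5

multiplier = growth ** years   # ≈ 7.6x today's traffic in five years
```

In other words, a network dimensioned for today would need to carry roughly seven and a half times the traffic within five years, which is the real argument for building capacity ahead of demand.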
As Alfonso Fuggetta said with an apt expression, one cherry pulls the other: we start with simple things and then do more and more, with ever greater complexity, ever greater requirements and ever more bandwidth. And at the same time our expectation of ever shorter reaction times increases.
Ultimately, for users, the killer app is still human impatience.
The Ministry has set up 140 Telecommunications Interception Center (CIT) rooms with a dedicated network, dedicated cabling, and a dedicated supply of laptop PCs. In each CIT room the ministerial server has also been installed, and software has been developed to manage the multimedia digital archive and the document archive; 60 million euros have already been spent on technological infrastructure, building works and the necessary purchases; 700 servers and racks are dedicated to interceptions alone; over 1,100 PCs are dedicated to the listening rooms; and about 3,500 people are involved in specific training (administrative staff, magistrates and judicial police).