XML RSS Search


New deepfake generation and detection methods signal new AI arms race
New methods of creating deepfakes are being developed with advanced artificial intelligence techniques, along with promising detection solutions. Fake people...
Elon Musk and an Artificial Intelligence Too Dangerous to Be Made Public
By Brett Tingley. The latest terrifying artificial intelligence dev...
America vs 'Made in China 2025': US-China talks aim to end damaging trade war
China economy expert Dinny McMahon joins PM to discuss the US–China conflict over the future of artificial intelligence and intellectual property.
Emerging Tech / Artificial Intelligence Conference – Gainesville, FL
The University of Florida Philosophy Department will host a conference entitled Promise and Problems in Emerging Technology: Shaping the Societal Impact of Artificial Intelligence. The conference is co-sponsored by the...
Facebook Acted Like ‘Digital Gangsters,’ British Parliament Says

Facebook has been allowed to act like “digital gangsters,” a blistering report from a British parliamentary committee said on Monday, by overlooking data privacy laws in order to expand its massive advertising business.

The report, put together over an 18-month period by the U.K. Digital, Culture, Media and Sport Committee, called for new laws to regulate the tech industry — something lawmakers both in the U.S. and abroad have been increasingly calling for since Facebook’s Cambridge Analytica data leak was discovered last year.

“Companies like Facebook should not be allowed to behave like ‘digital gangsters’ in the online world, considering themselves to be ahead of and beyond the law,” the report said. Facebook, according to the report, has “intentionally and knowingly” skirted consumer-protection and digital privacy laws as it has grown into a business pulling in more than $50 billion in revenue per year.


Facebook did not immediately respond to TheWrap’s request for comment, but a company spokesperson said it was open to “meaningful regulation” in a statement to The New York Times.

“While we still have more to do, we are not the same company we were a year ago,” Karim Palant, a public policy manager for Facebook in the U.K., told The Times. “We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.”

The committee used documents obtained by Six4Three, a small company that is suing Facebook in California after its app, which allowed friends to find pictures of their friends in bikinis, was decimated after Facebook changed its third-party sharing policies in 2015. The court documents, according to the committee, show Facebook was “willing to override its users’ privacy settings in order to transfer data” to app developers.


“Facebook’s handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms,” the report added.

The report recommended the creation of a British watchdog to keep tabs on the entire tech industry. The committee doesn’t have the ability to create new laws, but chairman Damian Collins told The Times he hopes the recommendations are accepted by Parliament. The report comes as Facebook is negotiating with the U.S. Federal Trade Commission on a multi-billion-dollar settlement to close its investigation into privacy violations.

You can read the full report here.


Fake News by Robots? Axios Uses AI Program to Write ‘Not True’ Story in Experiment

Axios offered a glimpse of the possible brave new world of journalism over the weekend, publishing a story written almost entirely by artificial intelligence.

The story — which the site’s tech reporter Kaveh Waddell stressed was “not true” — still read smoothly and would have been largely indistinguishable from content written by Axios’ human reporters.

“I wrote the first two sentences in italics below — they’re from a story we published in Monday’s Axios Future newsletter,” Waddell wrote. “Then, a new computer program created at OpenAI wrote the rest, on the first try.”


The story itself — which, again, is fake — focused on “a new AI strategy document” which “speaks in stark terms of a ‘destabilizing’ Chinese threat.”

“It warns of a ‘new arms race in AI’ and says the United States ‘will not sit idly by’ as a ‘highly advanced new generation of weapons capable of waging asymmetric warfare’ is ‘possessed by aggressive actors,'” it reads.

Advances in artificial intelligence have led to disruptions in many industries in recent years, with machines and automated kiosks replacing human workers at a steady clip. One report from the McKinsey Global Institute has warned that by 2030, machines could wipe out more than 800 million jobs worldwide.


The Axios story suggests journalists might not be immune to the wipeout either. For years, programmers have been tinkering around the margins and have successfully used AI to create content.

Back in 2014, the Los Angeles Times made headlines after revealing that a robot was used to write a breaking news story about a small earthquake in the city, making the paper first out with its report.


Google Translate is a manifestation of Wittgenstein’s theory of language
Quartz: “More than 60 years after philosopher Ludwig Wittgenstein’s theories on language were published, the artificial intelligence behind Google Translate has provided a practical example of his hypotheses. Patrick Hebron, who works on machine learning in design at Adobe and studied philosophy with Wittgenstein expert Garry Hagberg for his bachelor’s degree at Bard College, notes […]
Intellipedia Entries

Background

Intellipedia is an online system for collaborative data sharing used by the United States Intelligence Community (IC). It was established as a pilot project in late 2005, formally announced in April 2006, and consists of three wikis running on JWICS, SIPRNet, and Intelink-U. The levels of classification allowed for information on the three wikis are Top Secret, Secret, and Sensitive But Unclassified/FOUO, respectively. They are used by individuals with appropriate clearances from the 16 agencies of the IC and other national security-related organizations, including Combatant Commands and other federal departments. The wikis are not open to the public.

Intellipedia is a project of the Office of the Director of National Intelligence (ODNI) Intelligence Community Enterprise Services (ICES) office, headquartered in Fort Meade, Maryland. It includes information on the regions, people, and issues of interest to the communities using its host networks. Intellipedia uses MediaWiki, the same software used by the Wikipedia free-content encyclopedia project. ODNI officials say the project will change the culture of the U.S. intelligence community, widely blamed for failing to “connect the dots” before the September 11 attacks.

The Secret version connected to SIPRNet predominantly serves Department of Defense and Department of State personnel, many of whom do not use the Top Secret JWICS network on a day-to-day basis. Users on unclassified networks can access Intellipedia from remote terminals outside their workspaces via a VPN, in addition to their normal workstations. Open Source Intelligence (OSINT) users share information on the unclassified network. (Source: Wikipedia)

Intellipedia Manual, 2007 Edition, Released in Full [17 Pages, 1.73MB] – After the full denial of this 2009 request was appealed, the document was sent to the Department of State and finally released in full in October 2013.
Intellipedia MAIN PAGE, Released September 2017 (printed and reviewed on June 14, 2016) [16 Pages, 2.3MB]

Individual Entries and Category Pages:

Abdullah Gul [ 7 Pages, 1.1MB | a/o 08/2018 ] – Discovered while doing a search for “Deep State.” This was the only responsive record.
Able Archer 83 [ 10 Pages, 1.9MB | a/o 06/2017 ]
Actors of Concern [ 10 Pages, 1.4MB | a/o 12/2017 ]
Adel al-Jubeir [ 15 Pages, 1.9MB | a/o 05/2018 ]
American Political Scandals (Category) [ 6 Pages, 1.1MB | a/o 9/2017 ]
Anthrax Disease [ 6 Pages, 3.3MB | a/o 01/2019 ]
Anti-Satellite Weapon [ 24 Pages, 3.6MB | a/o 12/2018 ]
Apocalypse (Category) [ 3 Pages, 0.7MB | a/o 10/2018 ]
Area 51 – Dining Facility ARMY/INSCOM Release [ 3 Pages, 1.1MB | a/o 11/2016 ]
Area 51 – Military Base NSA Release [ 7 Pages, 1.4MB | a/o 06/2018 ]
Artificial Intelligence [ 46 Pages, 8MB | a/o 02/2017 ]
Attack on Pearl Harbor [ 15 Pages, 2.7MB | a/o 06/2017 ]
Ballistic Missile Early Warning System [ 19 Pages, 10.2MB | a/o 01/2019 ]
Bay of Pigs [ 8 Pages, 0.6MB | a/o 08/2014 ]
Benghazi [ 9 Pages, 0.9MB | a/o 05/2016 ]
Bent Spear [ 5 Pages, 0.6MB | a/o 12/2015 ]
Bill Clinton [ 35 Pages, 6.3MB | a/o 05/2017 ]
Broken Arrow [ 5 Pages, 0.6MB | a/o 12/2015 ]
Camp X [ 5 Pages, 0.6MB | a/o 07/2016 ]
Carl Jung [ 15 Pages, 2.0MB | a/o 09/2018 ]
Central Intelligence Agency (CIA) [ 19 Pages, 2.4MB | a/o 03/2018 ]
Church Committee [ 14 Pages, 2.8MB | a/o 07/2017 ]
Climate Change [ 20 Pages, 0.6MB | a/o 05/2017 ]
Cloud Seeding [ 17 Pages, 2.9MB | a/o 07/2017 ]
Cold Fusion [ 20 Pages, 4.9MB | a/o 03/2017 ]
Cold War (Category) [ 9 Pages, 1.2MB | a/o 08/2018 ]
COINTELPRO [ 10 Pages, 2.0MB | a/o 12/2016 ]
Commercial Passenger Spaceflight [ 16 Pages, 1.9MB | a/o 09/2018 ]
Communications Instructions for Reporting Vital Intelligence Sightings [ 3 Pages, 2.0MB | a/o 07/2018 ]
Confirmation Bias [ 15 Pages, 2.5MB | a/o 05/2018 ]
Continuity of Government [ 7 Pages, 1.7MB | a/o 11/2016 ]
Corona – NSA Release [ 11 Pages, 2.5MB | a/o 06/2017 ]
Corona (Satellite) – NRO Release [ 21 Pages, 3.19MB | a/o 02/2014 ]
Crisis in Qatar [ 15 Pages, 1.9MB | a/o 05/2018 ]
Daniel Ellsberg [ 20 Pages, 3.2MB | a/o 07/2017 ]
Defense Advanced Research Projects Agency (DARPA) Category [ 5 Pages, 1.9MB | a/o 01/2019 ]
Defense Advanced Research Projects Agency (DARPA) [ 12 Pages, 1.8MB | a/o 12/2018 ]
Defense Planning Guidance [ 10 Pages, 1.4MB | a/o 12/2017 ]
Diplomatic Incidents (Category) [ 6 Pages, 0.8MB | a/o 9/2017 ]
DISHFIRE [ 4 Pages, 0.8MB | a/o 12/2018 ]
Donald Trump NSA Release #1 [ 8 Pages, 1.9MB | a/o 03/2017 ]
Donald Trump NSA Release #2 [ 8 Pages, 1.9MB | a/o 07/2017 ]
Dull Sword [ 5 Pages, 0.6MB | a/o 12/2015 ]
Earth [ 19 Pages, 2.8MB | a/o 09/2018 ]
Echelon [ 2 Pages, 3.19MB | 03/2014 ] – The FOIA for this entry received the “GLOMAR Response.”
Edward Snowden [ 5 Pages, 3.19MB | 09/2016 ] – I also requested entries for The Intercept and Glenn Greenwald, both of which do not exist (allegedly).
Elbonia [ 10 Pages, 1.9MB | 07/2017 ]
Empty Quiver [ 7 Pages, 1.4MB | a/o 09/2016 ] – This keyword entry forwards to the “Nuclear Incident” page.
Erehwon [ 10 Pages, 1.9MB | 07/2017 ]
False Memory [ 41 Pages, 1.9MB | 09/2017 ] – This request asked for Intellipedia entries on “Aliens” and “Extraterrestrials.” The results came up with a few responsive entries mentioning these keywords. This .pdf contains the entire release.
Famous Freemasons [ 15 Pages, 3.2MB | a/o 02/2017 ]
Fear Approach [ 12 Pages, 1.7MB | a/o 10/2017 ]
Federal Bureau of Investigation (FBI) [ 27 Pages, 16.4MB | 10/2016 ]
Fictional Countries [ 10 Pages, 1.9MB | 07/2017 ]
Foreign Intelligence Surveillance Act [ 28 Pages, 4.7MB | a/o 09/2016 ]
Fracking [ 6 Pages, 0.3MB | a/o 08/2015 ]
Frank Church [ 14 Pages, 2.8MB | a/o 07/2017 ]
Freedom of Information Act (FOIA) [ 16 Pages, 3.2MB | a/o 11/2016 ]
Freemasonry [ 18 Pages, 3.2MB | a/o 09/2016 ]
Freemasonry “ic_masons” Chat Room [ 4 Pages, 1.4MB | a/o 11/3/2016 ]
Fringe Physics (Category) [ 5 Pages, 1.5MB | a/o 12/2016 ]
GeoLITE [ 5 Pages, 0.7MB | a/o 03/2016 ]
Green [ 41 Pages, 1.9MB | 09/2017 ] – This request asked for Intellipedia entries on...

The post Intellipedia Entries appeared first on The Black Vault.

VTT develops online AI maturity tool

VTT Technical Research Centre of Finland has developed a tool with which organisations can quickly and easily test their artificial intelligence (AI) readiness.

The post VTT develops online AI maturity tool appeared first on Good News from Finland.

Machine Learning / Artificial Intelligence Scientist - Ford Motor Company - Dearborn, MI
We enable evidence-based decision-making by providing timely and actionable insights to our internal business partners...
From Ford Motor Company - Tue, 08 Jan 2019 14:14:09 GMT - View all Dearborn, MI jobs
Information Design and Visualization Specialist - Booz Allen Hamilton - Washington, DC
Are you fascinated by the possibilities presented by the IoT, machine learning, and artificial intelligence advances?...
From Booz Allen Hamilton - Thu, 08 Nov 2018 16:40:47 GMT - View all Washington, DC jobs
Siemens: Siemens DF FA System Architect for Edge & Artificial Intelligence (AI) – Siemens (China) Digital Factory AI System Architect
Siemens (salary: competitive): Position Overview: In this position as System Architect, you are responsible for designing deliverable systems and solutions and supporting their success. Location: Beijing, China.
Creators Of Writing AI Say It Could Do Damage In Wrong Hands
A new artificial intelligence system is so good at composing text that the researchers behind it said they won't release it for fear of how it could be misused.
Sex robots are raising hard questions

The robots are here. Are the “sexbots” close behind?

From the Drudge Report to The New York Times, sex robots are rapidly becoming a part of the national conversation about the future of sex and relationships. Behind the headlines, a number of companies are currently developing robots designed to provide humans with companionship and sexual pleasure, with a few already on the market.

Unlike sex toys and dolls, which are typically sold in off-the-radar shops and hidden in closets, sexbots may become mainstream. A 2017 survey suggested almost half of Americans think that having sex with robots will become a common practice within 50 years.

As a scholar of artificial intelligence, neuroscience and the law, I’m interested in the legal and policy questions that sex robots pose. How do we ensure they are safe? How will intimacy with a sex robot affect the human brain? Would sex with a childlike robot be ethical? And what exactly is a sexbot anyway?

Defining “sex robot”

There is no universally accepted definition of “sex robot.” This may not seem important, but it’s actually a serious problem for any proposal to govern, or ban, them.

The primary conundrum is how to distinguish between a sex robot and a “sexy robot.” Just because a robot is attractive to a human and can provide sexual gratification, does it deserve the label “sex robot”?

It’s tempting to define them as legislatures do sex toys, by focusing on their primary use. In Alabama, the only state that still has an outright ban on the sale of sex toys, the government targets devices “primarily for the stimulation of human genital organs.”

The problem with applying this definition to sex robots is that the latter increasingly provide much more than sex. Sex robots are not just dolls with a microchip. They will use self-learning algorithms to engage their partner’s emotions.

Consider the “Mark 1” robot, which resembles the actor Scarlett Johansson. It is regularly labeled a sex robot, yet when I interviewed its creator, Ricky Ma Tsz Hang, he was quick to clarify that Mark 1 is not intended to be a sex robot. Rather, such robots will aim to assist with all sorts of tasks, from preparing a child’s lunch to keeping an elderly relative company.

Humans, of course, can navigate both sexual and nonsexual contexts adeptly. What if a robot can do the same? How do we conceptualize and govern a robot that can switch from “play with kids” mode during the day to “play with adults” mode at night?

Thorny legal issues

In a landmark 2003 case, Lawrence v. Texas, the Supreme Court struck down Texas’s sodomy law and established what some scholars have described as a right to sexual privacy.

There is currently a split among circuit courts in how Lawrence should be applied to state restrictions on the sale of sex toys. So far, Alabama’s ban has been upheld, but I suspect that all sex toy bans will eventually be struck down. If so, it seems unlikely that states will be able to broadly restrict the sale of sex robots.

Bans on childlike sex robots, however, may be different.

It is not clear whether anyone in the U.S. already owns a childlike sex robot. But even the possibility of child sex robots prompted a bipartisan House bill, the Curbing Realistic Exploitative Electronic Pedophilic Robots Act, or CREEPER. Introduced in 2017, it passed unanimously six months later.

State politicians will surely follow suit, and we are likely to see many attempts to ban childlike sex robots. But it’s unclear if such bans will survive constitutional challenge.

On one hand, the Supreme Court has held that prohibitions on child pornography do not violate the First Amendment because the state has a compelling interest in curtailing the effects of child pornography on the children portrayed. Yet the Supreme Court has also held that the Child Pornography Prevention Act of 1996 was overly broad in its attempt to prohibit “child pornography that does not depict an actual child.”

Childlike sex robots are robots, not humans. Like virtual child pornography, the development of a childlike sex robot does not require interaction with any children. Yet it might also be argued that childlike sex robots would have serious detrimental effects that compel state action.

Safe and secure?

Perhaps someday sex robots will become sentient. But for now, they are products.

And a question almost entirely overlooked is how the U.S. Consumer Product Safety Commission should regulate the hazards associated with sex robots. Existing sex products are not well regulated, and this is cause for concern given the multitude of ways in which sex robots could be harmful to their users.

For example, dangers lurk even in a seemingly innocent scene where a sex robot and human hold hands and kiss. What if the sexbot’s lips were manufactured with lead paint or some other toxin? And what if the robot, with the strength of five humans, accidentally crushed the human’s finger in a display of passion?

It’s not just physical harm, but security as well. For instance, just as a human partner learns by remembering what words were soothing, and what type of touch was comforting, so too is a sex robot likely to store and process massive amounts of intimate information. What regulations are in place to ensure that this data remains private? How vulnerable will the sex robot be to hacking? Could the state use sex robots as surveillance devices for sex offenders?

Sexbots in the city

Whether and how governments regulate sex robots will depend on what we learn, or what we assume, about the effects of sexbots on individuals and society.

In 2018, the Houston City Council made headlines by enacting an ordinance to ban the operation of what would have been America’s first so-called robot “brothel.” At one of the community meetings, an attendee warned: “A business like this would destroy homes, families, finances of our neighbors and cause major community uproars in the city.”

But dire predictions like this are pure speculation. At present there is no evidence of how the introduction of sex robots would affect either individuals or society.

For instance, would a man who uses a childlike sex robot be more or less likely to harm an actual human child? Would robots be a substitute for humans in relationships or would they enhance relationships as sex toys might? Would sex robots fill a void for those who are lonely and without companions? Just as pilots use virtual flight simulators before they fly a real plane, could virgins use sex robots to safely practice sex before trying the real thing?

Put another way, there are far more unanswered questions about sex robots than there are actual sex robots. Although it’s hard to conduct empirical studies until sexbots are more prevalent, informed governance requires researchers to explore these topics urgently. Otherwise, we may see reactionary governance decisions based on supposition and fear of doomsday scenarios.

A brave new world

A fascinating question for me is how the current taboo on sex robots will ebb and flow over time.

There was a time, not so long ago, when humans attracted to the same sex felt embarrassed to make this public. Today, society is similarly ambivalent about the ethics of “digisexuality,” a phrase used to describe a number of human-technology intimate relationships. Will there be a time, not so far in the future, when humans attracted to robots will gladly announce their relationship with a machine?

No one knows the answer to this question. But I do know that sex robots are likely to be in the American market soon, and it is important to prepare for that reality. Imagining the laws governing sexbots is no longer a law professor hypothetical or science fiction.

It’s a real-world challenge that society is about to face for the first time. I hope that the law gets it right.

Francis X. Shen is Associate Professor of Law at the University of Minnesota. This story originally appeared at The Conversation.

Canada Research Chair (Tier 1) in Imaging & Artificial Intelligence - University of Saskatchewan - Saskatoon, SK
Requisition: req3750 The College of Engineering at the University of Saskatchewan is pleased to invite applications for a tenure-track, Tier 1 Canada...
From University of Saskatchewan - Mon, 21 Jan 2019 00:19:27 GMT - View all Saskatoon, SK jobs
Data-driven future awaits economy
In 2019, emerging technologies ranging from the Internet of Things (IoT), blockchain and artificial intelligence (AI) to multiple cloud are continuing to gain momentum, shifting from proof of concept to real business use cases.
Automation, AI pressure factory labour market
The technological disruption caused by automated systems and artificial intelligence (AI) is pressuring the country's labour market, particularly the manufacturing sector, say business operators.
The Obstacle Tower Challenge is live!
Three weeks ago we announced the release of the Obstacle Tower Environment, a new benchmark for Artificial Intelligence (AI) research built using our ML-Agents toolkit. One week ago we followed that up with the launch of the Obstacle Tower Challenge, a contest that offers researchers and developers the chance to compete to train the best-performing […]
British Lawmakers Accuse Facebook Of 'Intentionally' Violating UK Privacy Laws

A new report issued by British lawmakers argues Facebook “intentionally and knowingly” violated the United Kingdom’s data privacy and competition laws.

The report, published by the U.K. Parliament’s Digital, Culture, Media and Sport Committee on Monday, said Facebook’s “handling of personal data, and its use for political campaigns, are prime and legitimate areas for inspection by regulators, and it should not be able to evade all editorial responsibility for the content shared by its users across its platforms.”

The committee began studying the role of social media platforms ― specifically Facebook ― in peddling disinformation and interfering in political elections nearly two years ago. The report called on U.K. lawmakers to create regulations to hold tech giants like Facebook accountable for the malicious spread of online disinformation.

Lawmakers reviewed a series of internal Facebook documents they obtained late last year from Six4Three, an app developer that filed a lawsuit against Facebook in the United States in 2015. According to the committee, those documents showed Facebook was “willing to override its users’ privacy settings in order to transfer data” to other big tech companies like Netflix and Spotify. Facebook has stated that it never sold users’ data and that those companies were business partners.

The 108-page report also said Facebook was intentionally “starving” its competitors by limiting those apps’ access to data, as alleged in Six4Three’s lawsuit. Parliament should investigate whether Facebook is “unfairly using its dominant market position in social media to decide which business should succeed or fail,” according to the report.

Facebook told HuffPost that the platform supports privacy legislation and denied breaking any laws.

“No other channel for political advertising is as transparent and offers the tools that we do,” said Karim Palant, Facebook’s public policy manager for the U.K. “We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.”

Facebook has also been accused of spreading false claims during the 2016 U.S. presidential election and the referendum in the U.K. on whether it should remain in the European Union.

U.K. lawmakers said the tech giant broke the law last year through its dealings with Cambridge Analytica, a political consultancy that improperly accessed the data of 87 million Facebook users.

“These last 12 months really have been a period of investigation and discovery,” Damian Collins, the committee’s leader, told The Washington Post. “But I think the year ahead has to be a year of action.”

The report said that tech giants like Facebook should be legally required to follow a code of ethics and immediately remove harmful online content like disinformation and that independent regulators should be allowed to monitor and fine platforms that don’t follow the rules.

Collins told the Post he plans to make Facebook CEO Mark Zuckerberg testify before Parliament if the tech mogul ever lands in Britain. The lawmaker first shared that sentiment after Zuckerberg failed to appear at hearings multiple times last year.

This story has been updated to include a comment from Karim Palant.

What an American artificial intelligence initiative really needs
The American AI Initiative as it stands does little to blunt fears that America will lose its technological edge. In fact, its lack of particulars sends exactly the opposite message.

