Saturday, December 30, 2017

'SLAUGHTERBOTS' AND PROJECT MAVEN: DEATH BY ARTIFICIAL INTELLIGENCE













A.I., ARTIFICIAL INTELLIGENCE...EVEN STEPHEN HAWKING WARNED AGAINST IT.
Stephen Hawking warns artificial intelligence could end mankind

AUTONOMOUS WEAPONS...HANDING MACHINES THE DECISION OF WHOM TO KILL. 

 
Moral outsourcing and Artificial Intelligence. | Practical Ethics
Would you hand over a moral decision to a machine?
Killer Robots: New Reasons to Worry About Ethics - Forbes

THE U.S. MILITARY HAS BEEN 'PRACTICING' THE DEPLOYMENT OF SWARMS OF MICRO-DRONES FOR QUITE SOME TIME.
AFTER ALL, PRACTICE MAKES PERFECT, YES?
THE FOLLOWING IS A VIDEO OF THEM DOING JUST THAT.
"US Military Released Micro-Drone Swarm From FA 18 Super Hornet"

AS OF TODAY THERE IS NO KNOWN WORKABLE DEFENSE AGAINST SUCH DRONE SWARMS.
THEY CAN BE 'HARDENED' TO BE IMPERVIOUS TO ELECTROMAGNETIC PULSE.
THEY MOVE TOO QUICKLY AND WITH SUCH INCREDIBLE EVASIVE TECHNIQUES THAT THERE SIMPLY IS NO DEFENSE AGAINST THEM.

THEY ARE EASILY ARMED.

HOW EASILY?
  


How An Off-The-Shelf Drone Can Be Turned Into A Weapon | Popular Science

Surveillance drones aren't commonplace in the U.S., but they will be. Last year, Congress passed legislation directing the FAA to integrate unmanned aircraft into domestic airspace by 2015. The new regulation could allow recreational users far greater freedom to roam the skies, and it will almost certainly spur demand. The Association for Unmanned Vehicle Systems International estimates that drone sales in the U.S. could reach 110,000 units annually by 2017.

"The law will do for drones what the Internet did for desktop computers," says Peter Singer, a fellow at the Brookings Institution and an expert on drones. "It will open up entire new markets."
 
DARPA, BAE Systems Arm Small Drones With New Weapons


LET US PAUSE HERE AND THINK FOR A MOMENT.
HOW EASILY HAVE BATTLEFIELD TACTICS BEEN INCORPORATED BY OUR LAW ENFORCEMENT AGENCIES INTO 'CROWD CONTROL'?

HOW MANY MILITARY WEAPONS AND VARIOUS PIECES OF MILITARY EQUIPMENT HAVE BEEN TRANSITIONED INTO USE BY AMERICAN 'LAW ENFORCEMENT'?

THERE'S EVEN A FEDERAL PROGRAM TO MAKE SUCH THINGS AVAILABLE TO OUR LOCAL SHERIFF AND POLICE DEPARTMENTS, TO S.W.A.T. TEAMS...INCLUDING DRONES.

WHAT CHANCE WOULD THE 'AVERAGE CITIZEN' HAVE TO COMBAT SUCH THINGS?
"A 'slaughterbot' only has to be 90 percent reliable, or even 50 percent would be fine. So the technology is already essentially feasible... it doesn't require research breakthroughs. People who say 'Oh, all this is decades in the future' either don't know what they're talking about or are deliberately saying things they don't believe to be true.

[A]ll computer security measures can be defeated, but it is still useful to have them. Geo-coding, so [a device] can't go outside the country where you bought it, for example, would be good. Because you certainly want to prevent them from being used to start wars. And the kill switch is something that the Federal Aviation Administration is talking about requiring. I don't know if they have actually done it yet, but they're talking about requiring it for all drones above a certain size in the US.

But to my knowledge, there aren't any effective countermeasures. There is a laser weapon the Navy is using that can shoot down one fixed-wing drone at a time. It seems that it has to be a fairly large fixed-wing drone, and [the laser] has to focus energy on it for quite a while to do enough damage to bring it down. But I suspect that would not be effective against very large swarms. People talk about electromagnetic pulse weapons [as countermeasures], but I think you can harden devices against that. And then we get into stuff that is classified, and I don't know anything about that. I know that [the Defense Department] has been trying for more than a decade to come up with effective defenses and I'm not aware of any."
  --Stuart Russell, a professor of computer science at the University of California, Berkeley,
in an interview with the Bulletin of the Atomic Scientists.
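
RUSSELL MENTIONS 'GEO-CODING' AND KILL SWITCHES AS PARTIAL SAFEGUARDS. TO SHOW HOW SIMPLE THE BASIC IDEA IS, HERE IS A MINIMAL, PURELY HYPOTHETICAL SKETCH IN PYTHON; THE BOUNDING BOX AND FUNCTION NAMES ARE MADE UP FOR ILLUSTRATION AND DO NOT DESCRIBE ANY REAL DRONE FIRMWARE.

```python
# Minimal, hypothetical sketch of a "geo-coding" lockout for a drone:
# the firmware refuses to arm the motors if the GPS fix falls outside
# a permitted region. A real implementation would use precise country
# polygons, signed configuration, and tamper-resistant hardware.

# Rough bounding box for the contiguous United States (illustrative only).
PERMITTED_REGION = {
    "lat_min": 24.5, "lat_max": 49.5,
    "lon_min": -125.0, "lon_max": -66.9,
}

def inside_permitted_region(lat: float, lon: float) -> bool:
    """Return True if the GPS fix lies inside the permitted bounding box."""
    return (PERMITTED_REGION["lat_min"] <= lat <= PERMITTED_REGION["lat_max"]
            and PERMITTED_REGION["lon_min"] <= lon <= PERMITTED_REGION["lon_max"])

def arm_motors(lat: float, lon: float) -> bool:
    """Arm only when the current position is inside the permitted region."""
    if not inside_permitted_region(lat, lon):
        print("Geofence violation: refusing to arm.")
        return False
    print("Position OK: motors armed.")
    return True

if __name__ == "__main__":
    arm_motors(38.9, -77.0)   # inside the box -> arms
    arm_motors(33.3, 44.4)    # outside the box -> refuses
```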


ONE OF THE MILITARY'S NEWEST "PROJECTS"... USING A.I. TO KILL.

PROJECT MAVEN
WAS DESIGNED, INITIALLY, FOR SYRIA, "TO COMBAT ISIS"... WHICH, AS WE'VE RECENTLY HAD TO ADMIT, IS ALL BUT COMPLETELY DEFEATED IN BOTH IRAQ AND SYRIA. SO NOW WHAT?
WELL, THE PROJECT IS STILL GOING TO BE DEPLOYED AS A REALITY OF 'NEW WARFARE TECHNIQUES'.

ADD TO THE DEFENSE DEPARTMENT'S VERY REAL, PRESENT USE OF ROBOT SWARMS MANY THOUSANDS, EVEN HUNDREDS OF THOUSANDS, OF ARMED AND DEADLY MINI- AND MICRO-DRONES EQUIPPED WITH FACIAL RECOGNITION...DROPPED WHERE?
OVER THE MIDDLE EAST... OR OVER AMERICA?

WHAT COULD POSSIBLY GO WRONG, RIGHT?  

Maven is designed to be that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department.
  --Air Force Lt. Gen. Jack Shanahan, November 2017


How about a hundred Mavens?
"The good news is that Project Maven has delivered a game-changing AI capability. In doing so, the effort has demonstrated a level of technological innovation and programmatic agility that has been sorely lacking from most Defense Department digital initiatives. The bad news is that Project Maven’s success is clear proof that existing AI technology is ready to
revolutionize many national security missions—even if the department is not yet ready for the organizational, ethical, and strategic implications of that revolution.

Now that Project Maven has met the sky-high expectations of the department’s former second-ranking official, its success will likely spawn a hundred copycats throughout the military and intelligence community.


Project Maven is a crash Defense Department program that was designed to deliver AI technologies—specifically, technologies that involve deep learning neural networks—to an active combat theater within six months from when the project received funding. Most defense acquisition programs take years or even decades to reach the battlefield, but technologies developed through Project Maven have already been successfully deployed in the fight against ISIS. Despite their rapid development and deployment, these technologies are getting strong praise from their military intelligence users. For the US national security community, Project Maven’s frankly incredible success foreshadows enormous opportunities ahead—as well as enormous organizational, ethical, and strategic challenges.
 
In late April, 2017, Robert Work—then the deputy secretary of the Defense Department—wrote a memo establishing the 'Algorithmic Warfare Cross-Functional Team', also known as Project Maven. The team had only six members to start with, but its small size belied the significance of its charter.

Project Maven—directed by Air Force Lt. Gen. Jack Shanahan and led by Marine Corps Col. Drew Cukor—was tasked with developing and fielding the first operational use of deep learning AI technologies in the defense intelligence enterprise. The Defense Department has long funded basic research and development in AI and has fielded semi-autonomous systems.

But Project Maven is the first time the Defense Department has sought to deploy deep learning and neural networks, at the level of state-of-the-art commercial AI, in department operations in a combat theater.

Before Project Maven’s creation, the Defense Department was advised by leading AI experts in industry and academia to seek out a narrowly defined, data-intensive problem where human lives weren’t at stake and occasional failures wouldn’t be disastrous. Fortunately for the team, the defense intelligence community is currently drowning in data.
Every day, US spy planes and satellites collect more raw data than the Defense Department could analyze even if its whole workforce spent their entire lives on it.

As its AI beachhead, the department chose Project Maven, which focuses on analysis of full-motion video data from tactical aerial drone platforms such as the ScanEagle and medium-altitude platforms such as the MQ-1C Gray Eagle and the MQ-9 Reaper. These drone platforms and their full-motion video sensors play a major role in the conflict against ISIS across the globe. The tactical and medium-altitude video sensors of the ScanEagle, MQ-1C, and MQ-9 produce imagery that more or less resembles what you see on Google Earth. A single drone with these sensors produces many terabytes of data every day. Before AI was incorporated into analysis of this data, it took a team of analysts working 24 hours a day to exploit only a fraction of one drone’s sensor data.

The Defense Department spent tens of billions of dollars developing and fielding these sensors and platforms, and the capabilities they offer are remarkable. Whenever a roadside bomb detonates in Iraq, the analysts can simply rewind the video feed to watch who planted it there, when they planted it, where they came from, and where they went. Unfortunately, most of the imagery analysis involves tedious work—people look at screens to count cars, individuals, or activities, and then type their counts into a PowerPoint presentation or Excel spreadsheet.
Worse, most of the sensor data just disappears—it’s never looked at—even though the department has been hiring analysts as fast as it can for years.
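
PAUSE ON THAT 'TEDIOUS COUNTING'. IT IS EXACTLY THE KIND OF WORK AN OFF-THE-SHELF OBJECT DETECTOR CAN AUTOMATE TODAY. BELOW IS A ROUGH, HYPOTHETICAL SKETCH USING A PUBLICLY AVAILABLE, COCO-PRETRAINED DETECTOR; IT IS NOT MAVEN'S ACTUAL SYSTEM, JUST AN ILLUSTRATION OF THE PRINCIPLE.

```python
# Hypothetical sketch: automate the "count cars and people per frame" task
# with an off-the-shelf detector, instead of typing tallies into a spreadsheet.
# Uses a COCO-pretrained detector from torchvision; nothing here reflects
# the actual Project Maven models or data.
import csv
import cv2                      # pip install opencv-python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

COCO_PERSON, COCO_CAR = 1, 3    # COCO class indices for "person" and "car"

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

def count_objects(frame, threshold=0.6):
    """Return (people, cars) detected in a single BGR video frame."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model([to_tensor(rgb)])[0]
    labels = pred["labels"][pred["scores"] > threshold].tolist()
    return labels.count(COCO_PERSON), labels.count(COCO_CAR)

def tally_video(path, out_csv="counts.csv", every_n_frames=30):
    """Sample the video and write per-frame counts to a CSV file."""
    cap = cv2.VideoCapture(path)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "people", "cars"])
        i = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n_frames == 0:
                writer.writerow([i, *count_objects(frame)])
            i += 1
    cap.release()
```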

Thousands of people in the department currently work on analyzing full-motion drone video data. Plenty of higher-value analysis work will be available for these service members and contractors once low-level counting activity is fully automated. As such, Project Maven won’t exactly pay for itself through savings on salaries. Nevertheless, the benefit of automating this specific task—as well as the benefit that other Defense Department projects will derive from leveraging Maven’s AI capabilities and infrastructure—mean that Project Maven has more than justified its price tag of about $70 million.

Project Maven’s team—with the help of the Defense Innovation Unit Experimental, an organization set up to accelerate the department’s adoption of commercial technologies—managed to attract the support of some of the top talent in the AI field (the vast majority of which lies outside the traditional defense contracting base). Figuring out how to effectively engage the tech sector on a project basis is itself a remarkable achievement.

Access to the right talent and partnerships allowed Project Maven to structure its program correctly from the outset. Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI. A traditional defense acquisition process lasts multiple years, with separate organizations defining the functions that acquisitions must perform, or handling technology development, production, or operational deployment. Each of these organizations must complete its activities before results are handed off to the next organization.

Though modern AI techniques for imagery analysis are extremely capable, developing algorithms for a specific application is not yet effortless—not just plug-and-play. Building robust, deep learning AI systems requires huge data sets with which to train the deep learning algorithm. Training data must not only be available, but categorized and labeled in advance by humans. Paradoxically, this phase of automation can be very labor-intensive. In Maven’s case, humans had to individually label more than 150,000 images in order to establish the first training data sets; the group hopes to have 1 million images in the training data set by the end of January. Such large training data sets are needed for ensuring robust performance across the huge diversity of possible operating conditions, including different altitudes, density of tracked objects, image resolution, view angles, and so on. Throughout the Defense Department, every AI successor to Project Maven will need a strategy for acquiring and labeling a large training data set.
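
TO MAKE THE 'LABELING' POINT CONCRETE: IN PRACTICE, THE HUMAN-ASSIGNED LABELS AND THE 'DIVERSE OPERATING CONDITIONS' LOOK ROUGHLY LIKE THE SKETCH BELOW. THE FOLDER NAMES AND CLASSES ARE INVENTED; THEY DO NOT REFLECT MAVEN'S ACTUAL DATA.

```python
# Hypothetical sketch: humans sort image chips into labeled folders
# (e.g. data/train/vehicle, data/train/person, data/train/building),
# and augmentation simulates varied altitudes, view angles, and resolutions.
import torch
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),  # vary apparent altitude/resolution
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),                        # vary view angle
    transforms.ColorJitter(brightness=0.3, contrast=0.3), # vary lighting
    transforms.ToTensor(),
])

# Every image's label comes from the folder a human analyst placed it in.
train_set = datasets.ImageFolder("data/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

print(f"{len(train_set)} labeled images across classes: {train_set.classes}")
for images, labels in loader:
    ...  # feed batches to whatever detector/classifier is being trained
    break
```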

Once labeled data is ready, the algorithmic training process makes extremely intensive computational demands. Traditional IT infrastructure is practically useless for such computations. Many leading commercial tech companies have gone so far as to develop their own custom processors and cloud infrastructure networks to run AI computations. The department has spent years, and billions of dollars, trying to migrate its digital activity into the cloud, but none of that infrastructure was built with requirements for AI training and inference computation in mind. Project Maven had to build its own AI-ready infrastructure, including computing clusters for graphics processing, from scratch. Fortunately, some of this capability can be leveraged for future algorithm training on other department projects.
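
WHY 'TRADITIONAL IT INFRASTRUCTURE IS PRACTICALLY USELESS' FOR THIS: THE ARITHMETIC AT THE HEART OF TRAINING RUNS ORDERS OF MAGNITUDE FASTER ON GRAPHICS PROCESSORS. A TINY, HYPOTHETICAL BENCHMARK (TIMINGS WILL VARY BY MACHINE):

```python
# Tiny, hypothetical benchmark illustrating why ordinary IT hardware is a poor
# fit for deep-learning training: the same matrix multiplication, CPU vs GPU.
import time
import torch

x = torch.randn(4096, 4096)
y = torch.randn(4096, 4096)

t0 = time.time()
_ = x @ y                                   # CPU
cpu_s = time.time() - t0

if torch.cuda.is_available():
    xg, yg = x.cuda(), y.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    _ = xg @ yg                             # GPU
    torch.cuda.synchronize()
    gpu_s = time.time() - t0
    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s  (speedup ~{cpu_s/gpu_s:.0f}x)")
else:
    print(f"CPU: {cpu_s:.3f}s  (no GPU available on this machine)")
```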

Even before the final versions of Project Maven’s labeled data set and computational infrastructure were ready, the alpha and beta versions were used to develop algorithms that were shared with the user community to get feedback. Maven’s team heard from users with full-motion video know-how in the specific context of counter-ISIS operations in the Middle East. From their users, Maven’s developers found out quickly when they were headed down the wrong track—and could correct course. Only this approach could have provided a high-quality, field-ready capability in the six months between the start of the project’s funding and the operational use of its output.

IT TOOK ONLY SIX MONTHS TO MAKE IT A REALITY.

In early December, just over six months from the start of the project, Maven’s first algorithms were fielded to defense intelligence analysts to support real drone missions in the fight against ISIS."


IN 2013, THE ATLANTIC DID AN ARTICLE ON
DRONES: THE KILLING MACHINES.
"The drone is effective. Its extraordinary precision makes it an advance in humanitarian warfare. In theory, when used with principled restraint, it is the perfect counterterrorism weapon. It targets indiscriminate killers with exquisite discrimination. But because its aim can never be perfect, can only be as good as the intelligence that guides it, sometimes it kills the wrong people—and even when it doesn’t, its cold efficiency is literally inhuman.

TARGETS CAN BE WATCHED FOR MONTHS....JUST AS ALL OF US ARE WATCHED DAILY.

Surveillance technology allows for more than just looking: computers store these moving images so that analysts can dial back to a particular time and place and zero in, or mark certain individuals and vehicles and instruct the machines to track them over time. A suspected terrorist-cell leader or bomb maker, say, can be watched for months. The computer can then instantly draw maps showing patterns of movement: where the target went, when there were visitors or deliveries to his home. If you were watched in this way over a period of time, the data could not just draw a portrait of your daily routine, but identify everyone with whom you associate. Add to this cellphone, text, and e-mail intercepts, and you begin to see how special-ops units in Iraq and Afghanistan can, after a single nighttime arrest, round up entire networks before dawn.
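
HOW HARD IS THAT 'PATTERN OF LIFE' ANALYSIS ONCE THE SIGHTINGS ARE STORED? NOT HARD AT ALL. HERE IS A TOY, HYPOTHETICAL SKETCH; THE DATA FORMAT IS INVENTED AND THIS IS NOT ANY AGENCY'S ACTUAL SYSTEM.

```python
# Toy, hypothetical sketch of "pattern of life" analysis: given stored
# sightings (who/what was seen, when, and where), summarize each target's
# habitual locations and flag targets that repeatedly appear together.
from collections import Counter, defaultdict
from itertools import combinations

# Each sighting: (track_id, day, hour, grid_cell) -- grid_cell stands in
# for a binned lat/lon location; the format is invented for illustration.
sightings = [
    ("target_A", 1, 9,  "cell_12"), ("target_A", 2, 9,  "cell_12"),
    ("target_A", 3, 21, "cell_40"), ("target_B", 1, 9,  "cell_12"),
    ("target_B", 2, 9,  "cell_12"), ("target_C", 3, 14, "cell_77"),
]

# 1. Habitual locations: where does each target spend its time?
routine = defaultdict(Counter)
for track, day, hour, cell in sightings:
    routine[track][cell] += 1
for track, cells in routine.items():
    print(track, "most frequent location:", cells.most_common(1)[0])

# 2. Associates: which targets co-occur in the same place at the same time?
co_presence = Counter()
by_slot = defaultdict(set)
for track, day, hour, cell in sightings:
    by_slot[(day, hour, cell)].add(track)
for tracks in by_slot.values():
    for a, b in combinations(sorted(tracks), 2):
        co_presence[(a, b)] += 1
print("likely associates:", co_presence.most_common())
```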

When armed, [a drone] becomes a remarkable, highly specialized tool: a weapon that employs simple physics to launch a missile with lethal force from a distance, a first step into a world where going to war does not mean fielding an army, or putting any of your own soldiers, sailors, or pilots at risk.

When [Obama] took office, he inherited a drone war that was already expanding. There were 53 known strikes inside Pakistan in 2009 (according to numbers assembled from press reports by The Long War Journal), up from 35 in 2008, and just five the year before that. In 2010, the annual total more than doubled, to 117.

As U.S. intelligence analysis improved, the number of targets proliferated. Even some of the program’s supporters feared it was growing out of control. The definition of a legitimate target and the methods employed to track such a target were increasingly suspect. Relying on other countries’ intelligence agencies for help, the U.S. was sometimes manipulated into striking people who it believed were terrorist leaders but who may not have been.

DEATH BY REMOTE CONTROL BECAME TOO EASY, TOO TEMPTING.
Michael Morell, who was the deputy director of the CIA until June, was among those in the U.S. government who argued for more restraint. During meetings with John Brennan, who was Obama’s counterterrorism adviser until taking over as the CIA director last spring, Morell said he worried that the prevailing goal seemed to be using drones as artillery, striking anyone who could be squeezed into the definition of a terrorist—an approach derisively called “Whack-A-Mole.” Morell insisted that if the purpose of the drone program was to diminish al-Qaeda and protect the United States from terror attacks, then indiscriminate strikes were counterproductive.

Another routinely skeptical and cautious participant was James Steinberg, the deputy secretary of state for the first two and a half years of Obama’s first term, who adhered to a strict list of acceptable legal criteria drawn up by the State Department’s counsel, Harold Koh. These criteria stipulated that any drone target would have to be a “senior member” of al‑Qaeda who was “externally focused”—that is, actively plotting attacks on America or on American citizens or armed forces.

Koh was confident that even if his criteria did not meet all the broader concerns of human-rights activists, they would support an international-law claim of self-defense—and for that reason he thought the administration ought to make the criteria public. Throughout Obama’s first term, members of the administration argued about how much of the deliberation process to reveal. During these debates, Koh’s position on complete disclosure was dismissively termed “the Full Harold.” He was its only advocate.

MILITARY AND CIA PUSH FOR MORE KILLS.

The military and the CIA pushed back hard against Koh’s strict criteria. Special Forces commanders, in particular, abhorred what they saw as excessive efforts to “litigate” their war. The price of every target the White House rejected, military commanders said, was paid in American lives. Their arguments, coming from the war’s front line, carried significant weight.

Cameron Munter, a veteran diplomat who was the U.S. ambassador to Pakistan from 2010 to 2012, felt that weight firsthand when he tried to push back. Munter saw American influence declining with nearly every strike.
Matters came to a head in the summer of 2011 during a meeting to which Munter was linked digitally.

Concerned about balancing the short-term benefits of strikes (removing potential enemies from the battlefield) and their long-term costs (creating a lasting mistrust and resentment that undercut the policy goal of stability and peace in the region), Munter decided to test what he believed was his authority to halt a strike.
As he recalled it later, the move played out as follows:

Asked whether he was on board with a particular strike, he said no.

Leon Panetta, the CIA director, said the ambassador had no veto power; these were intelligence decisions.

Munter proceeded to explain that under Title 22 of the U.S. Code of Federal Regulations, the president gives the authority to carry out U.S. policy in a foreign country to his ambassador, delegated through the secretary of state. That means no American policy should be carried out in any country without the ambassador’s approval.

Taken aback, Panetta replied, “Well, I do not work for you, buddy.”

“I don’t work for you,” Munter told him.

Then Secretary of State Hillary Clinton stepped in: “Leon, you are wrong.”

Panetta said, flatly, “Hillary, you’re wrong.”

At that point, the discussion moved on. When the secretary of state and the CIA director clash, the decision gets made upstairs.

Panetta won. A week later, James Steinberg called Munter to inform him that he did not have the authority to veto a drone strike. Steinberg explained that the ambassador would be allowed to express an objection to a strike, and that a mechanism would be put in place to make sure his objection was registered—but the decision to clear or reject a strike would be made higher up the chain. It was a clear victory for the CIA.

HOW MANY UNTARGETED CIVILIANS ARE KILLED BY DRONES? 
The true numbers are unknowable.

Secrecy is a big part of the problem. The government doesn’t even acknowledge most attacks, much less release details of their aftermath. The Bureau of Investigative Journalism, a left-wing organization based in London, has made a strenuous effort, using news sources, to count bodies after CIA drone strikes. It estimates that from 2004 through the first half of 2013, 371 drone strikes in Pakistan killed between 2,564 and 3,567 people (the range covers the minimum to the maximum credible reported deaths).

When Bush branded our effort against al-Qaeda “war,” he effectively established legal protection for targeted killing.
Once the “war” on al-Qaeda ends, the justification for targeted killing will become tenuous. Some experts on international law say it will become simply illegal. Indeed, one basis for condemning the drone war has been that the pursuit of al‑Qaeda was never a real war in the first place.

WHO DECIDES WHO DIES BY DRONE? 
THE CIA.  

IT'S THEIR DATA, THAT MYTHICAL "INTELLIGENCE" REPORTED TO THOSE IN WASHINGTON BY THE CIA, WHICH DETERMINES WHO DIES.

WELL, REMEMBER THEIR "INTELLIGENCE" ON SADDAM HUSSEIN'S 'WEAPONS OF MASS DESTRUCTION', WHICH WE NEVER LOCATED?

THE CIA IS, SUPPOSEDLY, OBLIGATED TO PROVIDE ONLY FACTUAL, PROVABLE DATA BY WHICH DRONE STRIKES ARE DETERMINED.
THAT IS NOT ALWAYS THE CASE.


Philip Alston, a former United Nations special rapporteur on extrajudicial, summary, or arbitrary executions, concedes that al-Qaeda’s scope and menace transcend criminality, but nevertheless faults the U.S. drone program for lacking due process and transparency. He told Harper’s magazine:
"The CIA’s response to these obligations has been very revealing. On the one hand, its spokespersons have confirmed the total secrecy and thus unaccountability of the program by insisting that they can neither confirm nor deny that it even exists. On the other hand, they have gone to great lengths to issue unattributable assurances, widely quoted in the media, both that there is extensive domestic accountability and that civilian casualties have been minimal. In essence, it’s a ‘you can trust us’ response, from an agency with a less than stellar track record in such matters.

“Outside of the context of armed conflict, the use of drones for targeted killing is almost never likely to be legal,” Alston wrote in 2010.
Mary Ellen O’Connell agrees. “Outside of a combat zone or a battlefield, the use of military force is not lawful,” she told me.

HANG ON!
WE'VE ALREADY SEEN DRONES USED IN AMERICA THAT TARGETED AMERICAN CITIZENS, MAYBE NOT WITH 'DEADLY FORCE'...YET.
Police to Use Drones for Spying on Citizens | US News

Court Upholds Domestic Drone Use in Arrest of American Citizen


WE'VE SEEN 'ACCIDENTS' INVOLVING DRONES.
When drones fall from the sky | The Washington Post

12 drone disasters that show why the FAA hates drones - TechRepublic

ONE EVEN CRASHED NEAR THE WHITE HOUSE.
FAA Says Nearly 600 Drone Incidents Occurred Over Past 6 Months
Civilian drone crashes into Army helicopter | New York Post

WE'VE SEEN THE BORDER PATROL, HOMELAND SECURITY, EVEN SMUGGLERS USING DRONES.

AND THEN WE HAVE THOSE TWO TERMS COINED DURING THE BUSH #2 REGIME THAT SHOULD SEND CHILLS DOWN ALL OUR SPINES: "HOMEGROWN TERRORISTS" AND "POTENTIAL TERRORISTS".

AS I WROTE HERE BEFORE, BOTH THOSE TERMS HAVE VERY BROAD AND FUZZY DEFINITIONS, CAN BE 'INTERPRETED' JUST ABOUT ANY WAY SOME ENTITY LIKE THE 'FISA COURT' CARES TO INTERPRET THEM, AND THEREIN LIES THE PROBLEM WE MAY FACE SOONER THAN WE EVER DREAMED.

BY THE WAY, FOR A FAIR READ ON THE FISA ACT AND WHY CONGRESS SCREWED US ON THAT, SEE [THIS ARTICLE BY THE COUNCIL ON FOREIGN RELATIONS].

SINCE WHO GETS KILLED BY DRONE IS DETERMINED BEHIND CLOSED AND LOCKED DOORS BY A SMALL, UNKNOWN GROUP OF PEOPLE, FED BY CIA 'INTELLIGENCE', WHO WOULD EVER KNOW IF THERE WAS VALID JUSTIFICATION FOR ANY STRIKE ON ANY AMERICAN IN THE FUTURE?
NO ONE.
NO ONE WOULD EVER KNOW.
AND NO ONE COULD EVER HANG SUCH A KILLING ON ANYBODY.

WHAT IF ONE 'SIDE', LEFT OR RIGHT, DEMOCRATS OR REPUBLICANS, SIMPLY DECIDED ONE DAY UP AT THE ASYLUM IN D.C. THAT THE 'OTHER SIDE' HAD TO GO?
WHAT IF SOME LUNATIC, SOME ROGUE CIA AGENT, PENTAGON OFFICIAL, ANYBODY JUST DECIDED IT WAS TIME TO DO A LITTLE HOUSE CLEANING AND UNLEASHED THOUSANDS, HUNDREDS OF THOUSANDS OF DRONES ON CERTAIN AMERICANS?


WHAT IF "ENEMY AGENTS" DECIDED TO HIJACK OUR DRONES, OR BROUGHT IN OR DROPPED IN THEIR OWN?

THERE'S NO DEFENSE AGAINST SUCH A THING.

REMEMBER THAT HOLLYWOOD HIT, "THE TERMINATOR"?
WE'RE ALMOST THERE.

ORWELL'S 'BIG BROTHER' HAS BEEN WATCHING US FOR DECADES NOW AS WE HAVE BECOME THE MOST CLOSELY WATCHED CITIZENS OF ANY NATION ON EARTH.
ORWELL COULDN'T HAVE FORESEEN SUCH TECHNOLOGY, OR HE'D HAVE MADE THOSE BEHIND "BIG BROTHER" MACHINES.

REMEMBER, MAN THOUGHT HE COULD HARNESS THE ATOM, BUT INSTEAD BECAME A SLAVE AND A VICTIM TO IT.
ONCE UNLEASHED, RADIATION IS A VERY INDISCRIMINATE KILLER.

DRONES, 'SLAUGHTERBOTS', WILL BE MORE CHOOSY...UNTIL ALL WHO ARE HUMAN BECOME THE TARGETS.

 IT'S SOMETHING TO THINK ABOUT, PEOPLE OF EARTH.
BUT IT IS NOT SOMETHING WE WILL EVER LIKELY HAVE ANY SAY ABOUT.
AFTER ALL, WE DON'T RUN THIS ASYLUM ANYMORE. 
WE GAVE AWAY THAT POWER TO CORRUPT MEN WHOM WE CALL POLITICIANS AND TO AGENCIES LIKE THE CIA, FEMA AND OUR ACCURSED 'SUPREME COURT'.


GOOD LUCK, CITIZENS.
YOU'RE GOING TO NEED IT.








//WW