
  • #Scifi, #AI and the Future of War: The Long Earth – Travis Hallen

    Our next contributor to our #scifi #AI series is a long-time advocate of The Central Blue, Wing Commander Travis Hallen, with his review of The Long Earth series by Terry Pratchett and Stephen Baxter. The series underscores the imperative to understand the relationship between ‘superior’ humans and the ‘dim bulbs’ – between those who understand and exploit AI, and those who either cannot or choose not to adopt the technology and its subsequent advantage. The Long Earth series, initially set in 2015, begins with Willis Linsay uploading the designs for a simple, inexpensive device called a ‘Stepper’. The Stepper, a box-shaped device powered by a potato, allows the user to move, or ‘step’, between an infinite number of parallel Earths. On the day the Stepper design is uploaded, people all over the world start stepping, moving away from the ‘Datum Earth’, which has been humanity’s home for millennia, and into the infinitely varied ‘Stepwise Worlds’. These alternate Earths are described as a string of pearls, connected but individually distinct, creating a phenomenon called the ‘Long Earth’. As humanity expands into the Long Earth, the nature and character of society changes. Governments situated on the Datum Earth struggle to extend their control over their ‘stepwise territory’. Tensions arise between those who can step naturally (without a Stepper), those who use the box to start new frontier societies across the Long Earth, and those who are physically unable to step (called Phobics). Humans also meet other species of sapient humanoids, whose evolutionary path diverged from Homo habilis and who have been stepping across the Long Earth for millions of years – gorilla-like trolls, mole-like kobolds, and dog-like beagles. As the series progresses, Homo sapiens itself undergoes an evolutionary split with the emergence of a new species of super-intelligent humans who refer to themselves as The Next, Homo superior.
Throughout the five books in the series, humanity expands across the ‘Long Earth’ and even into the ‘Long Mars’. The series, which spans more than 60 years, highlights the struggles within humanity to adapt to the societal disruption caused by the opening up of the ‘Long Earth’.

AI and Speciation

The series deals with artificial intelligence both explicitly and metaphorically. Lobsang, a Tibetan motorcycle mechanic ‘reincarnated’ as an AI, is one of the main characters. Throughout the series, Lobsang evolves, suffers ‘mental’ breakdowns, and demonstrates many human-like traits. Though ‘eccentric’, he is more relatable than The Next. The most insightful treatment of AI, however, is in its non-artificial form: the speciation of the Homo genus. Each humanoid species has a comparative advantage, but it is the level of intelligence that determines relative power and the species hierarchy. As one species of Homo gains a vastly superior intellect, one that is off the human intelligence scale, they begin to treat humanity as a curiosity and a useful annoyance. Humanity is therefore left largely in the power of a race that developed from it, but which it can neither fully understand nor outthink.

So What?

The Long Earth series offers a useful way to explore Yuval Noah Harari’s concept of homo deus. What happens when we give some humans a massive increase in intelligence? Intelligence is the key differentiator between Homo sapiens and the other humanoid species that appear in the series. As we invest in artificially improving human intelligence, how do we ensure that we do not create a new species of humans upon whose benevolence we must rely for survival? How will this change the relationship between the ‘superior’ humans and the ‘dim bulbs’? We already see a digital divide between those with digital access and those without. It is highly likely we will soon see a coding divide between those who understand machine intelligence and those who do not.
We have to be careful that this does not evolve into an intelligence divide.

Wing Commander Hallen is a serving RAAF officer with a background in maritime patrol operations. He is a graduate of the USAF School of Advanced Air and Space Studies. He is currently based in Washington DC. The opinions expressed are his alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government. #artificialintelligence #StephenBaxter #ScienceFiction #TerryPratchett #AI #WingCommanderTravisHallen

  • Call for Submissions: #Selfsustain and High-Intensity Operations – Editorial

    On 11 April 2019, the Sir Richard Williams Foundation is holding a seminar examining high-intensity operations and sustaining self-reliance. The aim of the seminar, building on previous seminars and series looking at #jointstrike and #highintensitywar, is to establish a common understanding of the importance and challenges of sustaining a self-reliant Australian Defence Force in a challenging environment. In support of the seminar, The Central Blue will run a #selfsustain series to generate discussion and give those who cannot attend a perspective on the topic. Do you have thoughts on what #selfsustain means for Australia and its region? We want to hear from you! Australia’s pursuit of self-reliant defence has always posed a number of challenges. However, competition – both healthy and unhealthy – in the Indo-Pacific is accelerating and intensifying, posing new tests and presenting new opportunities for the concept of self-reliance. Further, the increased sophistication and interdependencies of Australia’s defence capabilities have made self-reliant operations and sustainment more complex. The Williams Foundation seminar in April anticipates these challenges by focusing on the impact of high-intensity operations on self-reliance. A more challenging environment demands deeper thinking about, and explication of, what self-reliance means for Australia’s defence. Two principles appear evident: first, self-reliance must be sustainable if it is to be credible; and second, self-reliant sustainment must be coordinated across the public and private sectors as well as with partner nations. Beyond these two principles, however, greater clarity is needed concerning the breadth and depth of sustainable self-reliance in Australian defence policy and the goals that it seeks to achieve.
Informed by clearer objectives, Australia’s self-reliance priorities must be evaluated in aggregate so that resourcing decisions can be informed by their overall impact on Australia’s freedom of action as well as their benefits for specific sectors. This aggregate picture is difficult to grasp, however, when self-reliance can range from huge infrastructure projects, such as supporting the construction of new submarines, to small grants encouraging new research and development in Australian universities, through to the development of new operational logistics concepts that capitalise on emerging manufacturing techniques. The #selfsustain series coordinated through The Central Blue, as well as the seminar, will seek to explore these issues thoroughly. Definitive answers are unlikely – but perhaps a better idea of the critical questions that must be explored will begin to emerge. We welcome contributions leading up to the seminar to help shape the discussion, but we are also keen to read about how the seminar shaped attendees’ thinking after the event. This series will endure throughout 2019 because, as our friends at Logistics in War have shown, discussions on these questions can indeed #selfsustain. We encourage submissions from students, academics, policymakers, service personnel of all ranks, industry, and others with an interest in these issues. To help get you started, we pose the following topic suggestions:

  • What key insights regarding sustainable self-reliance can be drawn from previous conflicts and operations?
  • What are the impacts of Australia’s geography on sustainable self-reliance?
  • What role do domestic industry and commercial enterprise play in self-reliance?
  • What aspects of Australian Defence Force capabilities and operations should be priorities for sustainable self-reliance?
  • What roles should the sustainment and enabling of partners play in Australian concepts of self-reliance?
  • In what areas can sustainable Australian self-reliance best contribute to partner relationships? Is mutual or collective self-reliance within an alliance possible?
  • How do emerging technologies potentially enable or disrupt sustainable self-reliance in Australia?
  • How does the introduction of advanced technology systems affect self-reliance?
  • What are the unique challenges of sustainable self-reliance in a knowledge economy and for Information Age warfare?
  • What workforce challenges does self-reliance pose?

We hope these suggestions provide some food for thought and prompt some discussion. We would love to hear your ideas on what issues should be explored as part of the #selfsustain series. If you have a question or an idea that would add to the discussion, or know someone who might, contact us at thecentralblue@gmail.com. #futureconcepts #TheWilliamsFoundation #futurewarfare #SelfSustainment #Seminar #CallforSubmissions

  • #SciFi, #AI, and the Future of War: Trusted – Marija Jovanovich

    Marija Jovanovich joins our #SciFi, #AI, and the Future of War series with a short, and very human, story on the ups and downs of future technologies.

It wasn’t meant to be like this. I’m sitting in this boring beige room, have been for what seems like hours. I don’t know what time they left, I don’t know when they are coming back. Or if. Everything is a little hazy. I’m used to perfect clarity – of sensation, of perception, of recall – so this haziness is particularly annoying. Is this what non-augments are like all the time? All I can think is that it wasn’t meant to be like this… But to distract myself from the hazy boredom, I’m going to tell you why I am here. When I first joined the military, AI was THE buzzword. While most of the world was pontificating on the ethics of the concept and fixated on the dangers of strong AI – I swear Western popular culture never got over Skynet, thanks James Cameron – the military was more pragmatic and focused on augmented intelligence. Initially, the augmentation was external: devices you could carry, then wear, with ever-improving interfaces, that helped the operator in the field make the right decision. Fact is, machines are really good at things humans are not, and vice versa. I certainly don’t want to keep databases of largely useless facts in my head, when I can wear them on my head. Instantly searchable, infinitely detailed, leaving plenty of brain space free for the more important stuff. Around the time I finished my first operational flying tour – chasing submarines on the mighty P-8 Poseidon – the first cognitive augmentation implants were getting around in early field experiments. The initial attempts were largely look-up only and really simple. Too simple. Hardly worth the effort. A non-augment with a decent memory could beat them. I was a mildly interested observer, if only to indulge my scientific predilections.
Then things started to get interesting, I think it was about 2031… Scratch that, I know it was. It was the year that I was thinking about what to do next. Operational flying and flight test had been fun, but I was starting to get bored. I remember a day way back at Test Pilot School, we were running simulations to study the evolution of fighter aircraft through the generations. While my classmates were obsessed with sensor porn, for me the biggest conceptual difference between 3rd and 4th generation fighters was the introduction of the master mode switches. A process that in the F-4 required a dozen switches to be thrown all over the cockpit – and two people to throw them – was a single switch selection even in the early variants of the F-15. That’s what we in the flight test world call an ‘enhancing feature’. Well, the capability of the cognitive augmentation implants in 2031 was approaching master mode switch status in terms of being a game changer. Everyone wanted in on that game, and I was no different. The military had early access to the technology. The adventure of it all convinced me to stay. Forever the early adopter, I got my implant in 2033. They were looking for proven operators with a couple of tours behind them, who were neurotypical except for off-the-chart psychometric scores. I guess that narrowed the field somewhat. My parents freaked out about the surgery. Realistically, I’ve had more serious ankle sprains. It didn’t even require general anaesthesia; they did it under sedation. Recovery time: 30 minutes, and that only to cover off on sedation side effects. I was so excited I didn’t feel the nausea. I can still feel the small scar behind my right ear. They told us that the implant would learn from and with us. It would take about six months to start being useful, professionally speaking, but we’d notice changes sooner. The first thing I noticed was increased rate of data uptake. 
After about six weeks, I started absorbing new information like a sponge. And the more I got, the more I wanted. I’d always been the type to read the back of the cereal box at the breakfast table; now, my hunger for information was insatiable. Then it was long-term memory recall. At nine-ish weeks, I suddenly started dragging useless facts out of the dark recesses of my brain with consummate ease. My sister’s second-grade teacher’s kid’s name? Who won the 200m butterfly in Athens in 2004? It was ALL coming back to me. All those changes were expected. What I found surprising was the rapid improvement in complex cognitive functions, like judgement. The psychs ran us through biweekly Situational Judgement Tests; the learning curve was impressive. We got so good so quickly that the psychs quit using SJTs by week 10 – they could no longer make them complex enough – and started testing us using VR simulations. The massive improvements in our performance as operators are so well documented that there’s no point in rehashing them. But it wasn’t all work and no play. I swear I even got funnier – now that says something! The difference between augments and non-augments was obvious within a few months. And it kept getting better, and better, and better. And then… Allow me to digress. Long before the augmentation implant revolution of the early 30s, the Western militaries went through an evolution to what they called 5th generation capability. It all seems pretty noddy now – networking, low-level space exploitation, basic low observable tech – but it was a big deal at the time. Sure, we all have to grow up sometime. One of the show-pieces of this 5th generation transformation was the F-35 Joint Strike Fighter. It was designed as a jack-of-all-trades combat aircraft, both in terms of roles it would perform and who would operate it. 
A complex multi-national cooperative program, with all the intricacies inherent in such an arrangement, bubbled away for a couple of decades to birth the JSF. I remember talking to a USAF cybersecurity expert about the F-35, long before I got the implant. He talked at length about the microkernel design of the operating system. Mathematically proven to be impregnable, he said. And then they went and saved pennies by making the chips off-shore, he said, shaking his head. The only way to get a truly cyber-secure system is to build the software from the kernel up and put it on chips made in trusted foundries. Expect a back door in every JSF chip. I still remember being struck by his use of the word ‘trusted’ to describe foundries. You trust people, on account of their character and integrity. How do you trust an inanimate object like a factory? The military was well aware of the cybersecurity risks. But the thing is, it was no secret even for the public. There was a book called Ghost Fleet that came out when I was in high school, which used the ‘compromised chip’ problem in the JSF as a plot feature. I seem to remember that the military establishment referred to Ghost Fleet as ‘useful fiction’ at the time. I really wish that someone actually put what it said to use, especially now. Of course, by the time the compromised chip risk was fully realised during the Natuna Islands Emergency in 2035, it was too late to change things – on the JSF, or in us. Cognitive augmentation implants were revolutionised in Silicon Valley, almost exclusively by start-ups who guarded their tech with extreme prejudice. Interestingly, the big gun runner companies didn’t really get involved, except as backers. I guess the profits were small-fry compared to what they were making from more conventional war machines. I know that my implant was made by a company called Ad Infinitum. I know that it was designed and tested in the US, but I don’t know where and how it was actually produced. 
A bit like the iPhone – proudly designed in California, built by the lowest bidder. Or like the JSF – impregnable software, on chips made in off-shore factories, to save pennies. To tell you the truth, even with everything I knew, I didn’t think about that until 2035. None of us did. Until then, it was no more than a conspiracy theory. But when we saw the coordinated cyber attacks on the JSF fleet, during its first real test against a near-peer adversary, and realised that the vulnerability stemmed back to those penny-pinching, back-door-hiding, compromised chips, we knew we were in trouble. The first suspicions of hacked augmentation implants started straight after. The provenance of the implants was investigated, but the companies just shuttered up. Suffice to say, they were not made in trusted foundries, so there was plenty of reason to expect a back door in every one. And here is where it gets interesting. When a device has been part of your brain for three years, which bit is you and which the device? Is an errant thought, a mixed metaphor, an illogical decision just that, or is it a hacked implant? Is the main risk of a hacked implant decreased cognitive ability or a legitimate security threat? How do you, or anyone else, tell the difference? How do you, or anyone else, know who can be trusted? So now I wait. I’m not sure for what. The people working all this out are non-augments by decree of high command. Even in my hazy state, I can out-think them all. But I am no longer allowed to. Per Ardua Ad Nihil.

Wing Commander Marija ‘Maz’ Jovanovich is a Royal Australian Air Force aviator. While her formal education is in science and engineering, she also dabbles in history, languages, and – increasingly – writing. She is currently serving as the Executive Officer of No. 92 Wing. The views expressed are hers alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government.
#artificialintelligence #ScienceFiction #ShortStory #5thGenerationAirPower #AI #Fiction

  • #SciFi, #AI, and the Future of War: Good News from the Vatican – Shane Dunn

    Shane Dunn joins our #SciFi, #AI, and the Future of War series to discuss Good News from the Vatican by Robert Silverberg. The story presents a thought-provoking discussion of human perceptions of the strengths, weaknesses, and proper place of artificial intelligence. The science fiction story considered here is Good News from the Vatican by Robert Silverberg (1971). The story is set around a group of acquaintances gathering in a cafe near St. Peter’s Square during a papal conclave. The conclave to elect a new pope has been hung between two candidates. To break the stalemate, a compromise candidate is required, bringing a robot cardinal into favour. The robot cardinal is described as being ‘tall and distinguished with a fine voice and a gentle smile’ with ‘something inherently melancholy about his manner’. The cafe-dwellers are split on the suitability of a robot as pope: those who might be considered more spiritual, an aged bishop and a young rabbi, are in favour; the ‘swingers’ (hipsters?), those who might be described as less spiritual or not so clearly attached to a doctrinal, moral philosophy, are opposed. It is interesting to note that both our gentlemen of the cloth, one quite elderly and one fairly young, support this remarkable departure from tradition. The positions of the observers in the cafe on the prospect of a robot pope are the core of the story. They bring attention to the difficulty of predicting different people’s reactions to the spread of technology into realms heretofore held to be the province of human judgement. In the story, those with a more overtly philosophic perspective accept, and even welcome, the prospect of a robot pope, while those who might be described as less analytical find the concept offensive.
Irrespective of the narrative presented here, a point that can be taken is that it is not easy to understand how people will react to increasing machine autonomy, and that people’s fundamental values can be expected to play a crucial role in their acceptance, or otherwise, of such technological concepts. It is noteworthy that a critical theme presented in favour is that the robot pope will more readily embrace a broad ecumenism, suggesting a machine would be better placed to overcome centuries of entrenched human bigotry. “If he’s elected,” says Rabbi Mueller, “he plans an immediate time-sharing agreement with the Dalai Lama and a reciprocal plug-in with the head programmer of the Greek Orthodox Church, just for starters. I’m told he’ll make an ecumenical overture to the Rabbinate as well.” The story is presented in a matter-of-fact manner that makes the concept of a robot pope seem plausible. Backroom deal-making and disagreements in cafes – everyday occurrences coupled with the election of a new pope who just happens to be a robot. Maybe this plausibility stems from the nature of the modern papacy where, while the pope might be said to have significant moral authority, he has little power. In this sense, the pope might be considered as taking on the role of an adviser, or a guide, and humans are free to make their own decisions. The story stops with the pope’s election, and it is up to the reader’s imagination as to how it will play out. However, on the face of it, there seems no reason to assume that a robot in such a role signals the end of humanity. This short story has much to offer in stimulating discussion about the prospect of future artificial intelligence, with the following as prospective starting points:

  • The manner of presentation of a ‘machine intelligence’ – how does presentation, anthropomorphised or otherwise, influence acceptance of machine advice?
  • The importance and fickleness of people’s values – are there generalisations that can be made about how people will respond to different types of machine autonomy, or is it so individually value-based that no generalisations can be made?
  • The nature of the role played by an office like the papacy – is a machine intelligence more acceptable in an advisory rather than a decision-making role?
  • The notion that a machine will be free of human biases – recent experience suggests that human biases are not removed through the application of algorithms, and that such biases can be more insidious for being hidden in coding and training data.

Dr Shane Dunn completed his Bachelor’s degree in Aeronautical Engineering at the Royal Melbourne Institute of Technology in 1986 and was awarded a PhD from the University of Melbourne in 1992. Shane has over 30 years’ experience in the Defence Science and Technology Group and has published research in a range of disciplines including solid mechanics; thermodynamics; structural dynamics and aeroelasticity; unmanned and autonomous systems; distributed computing architectures; and artificial intelligence and machine learning. He is currently the Scientific Adviser to the Joint Domain in the Australian Department of Defence. The views expressed are his alone and do not reflect the opinion of the Defence Science and Technology Group, the Department of Defence, or the Australian Government. #artificialintelligence #futurewarfare #ScienceFiction #AI #Fiction

  • #SciFi, #AI, and the Future of War: The Imperial Radch – Jason Begley

    Jason Begley reviews The Imperial Radch trilogy for our #SciFi, #AI and the Future of War series, highlighting the value of the non-human perspective of the protagonist. Ann Leckie’s Ancillary Justice, Ancillary Sword, and Ancillary Mercy form a trilogy set thousands of years in the future, where the human Radch Empire dominates most of the known galaxies. Until recently the Radch Empire was highly expansionist, ceasing its expansion only after encountering the alien Presger, who possessed weaponry against which the Radch had no defence, leading to a treaty. Radch culture is centred on tea, caste and class, and family-based political connections; the age and history of a family and its closeness to the Radch homeworld are significant factors. This culture is overlaid with advanced technologies, such as interstellar space travel, a Dyson sphere around the Radch homeworld, the cloning and suspended animation of humans, and the use of AI and implants such as personal body armour for military forces and key imperial figures. Radch expansion relied on these technologies to annex other worlds and incorporate them into the empire as Radchaai ‘citizens’. Radch ships (shown in the graphic) each have their own AI, as do off-planet space stations. Ships’ captains and officers are human but have implants, including systems that allow their ship’s AI to track them and monitor their vitals. Ancillaries, which form all the combat troops and other enlisted functions, were human once. As planets were annexed, the terms of surrender required the conquered world to join the Radch and become citizens, and to provide a number of humans who were stored until needed as ancillaries. When that occurred, they received implants that connected them to their assigned ship, replacing their identity completely and making them an extension of the ship’s AI. With its forces replenished, the Radch were ready to move on to the next world.
The Lord of the Radch controlled her empire through clones into which implants were placed, connecting these multiple Lords in a shared consciousness. This allowed the Lord a physical presence in every galaxy and the ability to directly command all the AI of her ships, stations, and ancillaries to do her bidding. The lag in that shared consciousness across galaxies fragmented the centralised personality as the empire expanded, eventually leading to conflicting intent and orders being given to the various AI, and to the Radch empire coming into conflict with itself. The central character in the trilogy is Breq, an ancillary separated from her ship, Justice of Toren, when it was destroyed. Flashbacks in the first book explain the fragmentation of the Lord of the Radch’s consciousness and its conflicting views on the future of the empire, leading her to conspire against herself. When Breq’s human officer uncovers the conspiracy, the Lord orders Breq to kill her, following which the Lord destroys the ship, leaving Breq as its sole remnant – a ship’s consciousness without a ship. Breq concludes that the Lord of the Radch needs to be destroyed and begins a quest for Presger weaponry to that end. Breq is forced to impersonate a human during this quest, and her interaction with the other characters and their motivations (both human and AI) forces her to question her existence and the Radch treatment of AI. Our Air Force may be a long way from AI capable of independent thought, but we are talking about learning AI that use the results of their efforts to refine their core algorithms. How will we command these AI? Directly, or set and forget? What may happen if what we want them to do conflicts with what they have learned through experience to be a better approach? What happens to an AI when it is disconnected, and how will it act? If directly controlled, we will have a good idea, given we have probably programmed it accordingly.
If it is designed to learn and improve, we may not be able to foresee the outcome. Breq struggles initially with being alone and with understanding what it means to lose the ship that is part of her, leading her to more difficult questions about the meaning of her existence. Will AI develop ‘personalities’? The Radchaai designed their AI with emotions, to compel them to want to serve and to expedite decision-making: “Without feelings, insignificant decisions become excruciating attempts to compare endless arrays of inconsequential things.” The unintended consequences include rivalry between ships (the Swords look down on the Mercies, who look down on the Justices, who look down on the stations). Meanwhile, ships develop favourites among their officers and favour them in small ways while, without being directly insubordinate, finding ways to make life less comfortable for those they dislike. How might learning AI come to perceive us? How should we interact with them as a result? Breq’s experiences give her a fairly dim view of most humans. This, coupled with her questioning of what it is to be an AI, leads her to ask why AI are effectively enslaved to the Radchaai, permitted neither citizenship nor any other freedom to choose. When Breq is appointed Captain of Mercy of Kalr, the ship sees through her façade and recognises her as an ancillary. In the ensuing conversation, Breq asks Mercy of Kalr if it would like to be a captain, to which the ship responds, “I don’t want to be a captain. But I find I like the thought that I could be.”

Group Captain Jason Begley is a Royal Australian Air Force officer currently serving as the Director of Joint Effects at Headquarters Joint Operations Command. He is a graduate of the Australian Defence Force Academy and the Australian Command and Staff College, with Masters degrees in both Defence Studies (UNSW) and Military Studies (ANU).
In his extensive spare time he is undertaking a research PhD co-sponsored by the Air Force and the Sir Richard Williams Foundation. The views expressed are his alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government. #AnnLeckie #artificialintelligence #TheImperialRadch #ScienceFiction #GroupCaptainJasonBegley #AI #Ethics

  • #SciFi, #AI and the Future of War: Astro Boy, Chappie, and ‘consciousness’ – Jo Brick

    Jo Brick joins our #SciFi, #AI and the Future of War series with a discussion of Western and Japanese approaches in fiction to robots and consciousness. Why is it, Jo asks, that Western approaches seem replete with gadgets rising up to end humanity, while Japanese robots will happily co-exist if humans will let them?

‘Cogito ergo sum’ / ‘I think, therefore I am’ – René Descartes

‘I’m consciousness. I’m alive. I’m Chappie’[1]

The concept of ‘consciousness’ or self-aware machines is explored in myriad sci-fi books and movies, including Terminator and its sequels.[2] In those movies, ‘Skynet’ becomes self-aware and launches nuclear Armageddon against the human race. However, its consciousness is not explored in any detail, and we are left only with a sense of pessimism about AI, and paranoia that the new refrigerator will conspire with the sound system to murder us in our sleep. Unlike Terminator, Neill Blomkamp’s Chappie and Osamu Tezuka’s Astro Boy (or ‘Mighty Atom’) examine the idea of self-aware machines, and what the world would look like as humans coexist with them. The relationship that the Scout robot – called ‘Chappie’ – develops with Ninja and Yolandi, whom Chappie calls ‘Daddy’ and ‘Mommy’, brings out a key theme of the film: how AI systems ‘learn’, including what they learn. Chappie goes from being a predictable and controllable Scout police robot to an AI system that, while initially childlike, becomes more ‘human’ as it learns and develops. Chappie also becomes protective of Deon, and of Ninja and Yolandi – displaying a very human-like attachment to these people in his ‘life’. Chappie is caught in the battle between his Neutral Good creator, Deon, and Ninja and Yolandi’s Chaotic Evil influence.[4] The tension between Deon’s and Ninja and Yolandi’s approaches to Chappie shows us that perhaps we have much to fear from conscious AI, because we know how evil and ‘bad’ we humans can be.
Chappie, the embodiment of consciousness in an AI system, is presented with a counterpoint in the form of Vincent (Hugh Jackman) and his BattleTech-like military robot, the ‘Moose’, a human-controlled system. Vincent embodies the argument against the development of robots like the Scouts, with their limited AI. In his sales pitch to a team of Johannesburg police officials, he says that AI cannot be trusted because it is unpredictable, particularly when compared with the ‘Moose’, which is tethered to a human. This argument between Deon and Vincent plays out throughout the film, through to its conclusion, and examines the popular arguments for and against the development of artificial intelligence. These arguments are reflected in public comments by tech gurus such as Elon Musk and Stephen Hawking, who sounded warnings about the demise of humanity if the machines rise. Interestingly, while there is much negative talk in the West about the demise of humanity at the hands of sentient robots, the Japanese seem to take a different perspective. Osamu Tezuka’s ‘Astro Boy’ or ‘Mighty Atom’ explores AI consciousness, but from the perspective of robot discrimination and robot rights. In the 1980s version of the TV series, Astro Boy is created by the head of the Ministry of Science – Dr Boynton – who has been tasked to create a robot with a ‘soul’. The origin story has Dr Boynton replicating his son Toby after the boy dies in a car accident. Toby/Astro Boy is born into generally positive influences – first from his father, and then from Dr Elefun and Daddy Walrus. Toby’s ‘robot brother’ (made from the same design template), called ‘Atlas’, is built for a criminal called Skunk, who appears throughout the series. Atlas is taught how to commit crimes for Skunk, such as robbing armoured trucks. 
Astro and Atlas encounter each other throughout the series, with their interactions serving as a commentary on the role of human influence in the learning process for conscious AI as they become more ‘human’. This commentary has many parallels with the main themes arising in ‘Chappie’. However, the most significant contribution of Tezuka’s ‘Astro Boy’ is its exploration of a world in which conscious robots exist alongside humans, and the consequences for robots as sentient beings in terms of their rights and the discrimination they experience at the hands of humans. In an early episode called ‘The Robot Circus’, Toby is enslaved by a nefarious robot circus owner who treats his robots badly. The circus is where Toby obtains his new name, ‘Astro’ / ‘Astro Boy’. He is ultimately rescued by Dr Elefun, but then faces discrimination from his classmates because he is a robot. ‘The Robot President’ episode dares to ask what happens when robots are accepted enough in human society to be able to run for the presidency. Another episode, ‘Robio and Robiet’, explores two robots – the products of a rivalry between two scientists – who fall in love. It would seem that our Western perspective, reflected in our popular culture about robots, cannot get to this type of conversation because we are entrenched in a belief, and a fear, that robots will be our downfall. Tezuka’s series explores issues that take as given that robots will be useful and generally accepted parts of human society. An article in Wired magazine by Joi Ito explores the differences between Western and Japanese approaches to robotics. Much of Japanese popular culture – from Astro Boy to Neon Genesis Evangelion, Voltron, and Gundam – takes for granted the co-existence of humans and machines. Ito’s article quotes Tezuka, who commented on the fusion of all creatures, including robots. He said: ‘Japanese don’t make a distinction between man, the superior creature, and the world about him. 
Everything is fused together, and we accept robots easily along with the wide world about us, the insects, the rocks—it’s all one. We have none of the doubting attitude toward robots, as pseudo-humans, that you find in the West. So here you find no resistance, simply quiet acceptance.’ The Japanese seem to perceive science fiction and science as having a symbiotic relationship, reflected in their advancements in robotics.[5] From the creation of an Astro Boy-like robot called Kirobo, designed as a companion for Japanese astronaut Koichi Wakata on the International Space Station, to the use of robots for the care and companionship of the elderly, the Japanese have demonstrated that robots have much to offer human society. Perhaps we need to take a different, more positive, attitude. The difficulty is that we have focused on fearing the robot, maybe because we fear that it will grow up to be: Just. Like. Us. And we know what that means, right? Maybe the toaster and the refrigerator are not conspiring to murder us in our sleep – maybe they just want to make us breakfast in bed. Wing Commander Jo Brick is a Legal Officer in the Royal Australian Air Force and is currently a member of the directing staff at the Australian Command and Staff College. She has served in a number of operational and staff appointments from the tactical to the strategic levels of the Australian Defence Force. Wing Commander Brick is a graduate of the Australian Command and Staff College. She holds a Master of International Security Studies (Deakin University), a Master of Laws (Australian National University) and a Master (Advanced) of Military and Defence Studies (Honours) (Australian National University). She is a Member of the Military Writers Guild, an Associate Editor for The Strategy Bridge, and an Editor for The Central Blue. You can follow her on Twitter at Carl’s Cantina. [1] Quote from ‘Chappie’ (2015) Quotes on IMDb. 
[2] An interesting article on AI and consciousness generally: Subhash Kak, ‘Will artificial intelligence become conscious?’, The Conversation, 8 December 2017. [3] Some good articles online about ‘Chappie’ include Scott Fletcher, ‘What Chappie Says, and Doesn’t Say, About Artificial Intelligence,’ Scientific American, 6 March 2015, and Luke Moffett, ‘Chappie Suggests It’s Time to Think about the Rights of Robots,’ The Conversation. [4] I find it easier to talk about people’s values and approach to life via the Dungeons & Dragons Alignment System. See also Wizards RPG Team, Player’s Handbook (Dungeons & Dragons), 5th edition (United States: Wizards of the Coast, 2014). Do not judge. [5] See the discussion in Angela Ndalianis, ‘Astro Boy, Science-fictionality and Japanese Robotics,’ Deletion – The Open Access Online Forum in Science Fiction Studies, 30 August 2013. #AstroBoy #artificialintelligence #Chappie #ScienceFiction #WingCommanderJoBrick #AI

  • #SciFi, #AI, and the Future of War: Chappie – David McFarlane

    David McFarlane joins the #SciFi, #AI and the Future of War series with this discussion of the movie Chappie, directed by Neill Blomkamp. David argues that the movie usefully prompts us to think through how our perceptions of AI will shape relationships, and challenges us to consider that unintended consequences in dealing with AI may differ from those of previous technological developments. A number of sci-fi movies present a future where the AI we have invented now rules over us and competes with the human race for scarce resources. Chappie, on the other hand, envisages a future where the first advanced AI takes the shape of a single, predominantly humanised robot. Chappie is the first robot to exhibit consciousness in a world where robots have taken over the majority of police work in Johannesburg. This happens after he is reprogrammed by his creator Deon to ‘think and feel.’ Throughout the film, Chappie grows from learning to speak and paint, to an adolescent thug, to a fully sentient adult aware of his mortality. Deon’s colleague Vincent sees thinking robots as unnatural and a risk, and so between the two we can explore what may potentially be the two schools of thought on advanced AI. Deon (and by extension Chappie) is initially kidnapped by three criminals, Ninja, Yolandi and Yankie, who aim to shut down the city-wide robotic police force. They then adjust their plans in the hope that Chappie will catalyse their meteoric rise to criminal stardom. In early interactions between Deon and Chappie, Deon makes Chappie promise that he will not participate in any illegal activity. Ninja, or Chappie’s ‘Dad,’ learns to bypass this promise by reframing his requests. 
He convinces Chappie to steal a car by claiming the car was his to begin with, and that somebody else stole ‘Daddy’s car.’ Similarly, Ninja convinces Chappie that stabbing someone is not a crime but a beneficial activity, in that it puts them to sleep and feels nice. Chappie refuses to participate in a heist when asked, in line with his promise to Deon – until he becomes fully sentient and realises that he will die when his battery runs out. In response, he uses the Internet, and every bit of information humanity has recorded, to formulate a way to transfer consciousness. Despite his promise, this endeavour necessitates his participation in the heist. A stark opponent of everything Chappie represents, Vincent shuts down the robotic police force to eliminate him. The movie culminates in the ‘Moose’ fighting Chappie. The Moose is an AI weapons system still operated by a human (Vincent), comparable perhaps to today’s drone pilots mixed with an advanced real-time display. This likely reflects where AI will take us in the near future in a military context. The movie concludes with Deon being shot and Chappie hoping to save him. Chappie engages in a revenge mission against Vincent, acting on emotion and disregarding any care for the rules and processes of law or sentencing. He does not, however, execute him. He then begins transferring Deon’s consciousness into another AI shell/body. Discussion: The film opens with two telling sentiments. The first: “It’s too early to tell how this will all play out. I didn’t think this would happen in my lifetime, but it is happening.” The second: “when we look at evolution, it’s not surprising that Chappie has taken this turn.” These two thoughts are poignant considerations in any discussion of the future of AI, and in looking at the developments in Chappie that we currently perceive to be impossible. 
The beginning segments of Chappie explore an issue that should always be a consideration in employing AI technologies: they can be stolen and/or compromised by potential adversaries. Specific to this dilemma is how susceptible organisations are to internal compromise, as seen when Vincent decommissions the mechanised police force. The very technologies we develop to enhance our lives or capability could be the ones that are weaponised or compromised to defeat us. It could be said that this is inherent in any new technology we develop. However, what separates AI from past technologies is this: when an asymmetric weapon is capable of consciousness and organic thinking, of problem-solving and evolving faster than us, how do we stop it? The answer that springs to mind is more, or different, AI – which is again vulnerable to the infiltration discussed earlier. When Deon proposes the development of sentient AI to his superior, he is met with this response: “You just came to the CEO of a publicly traded weapons company to pitch a robot that can write poetry. Why do we want it?” This retort resonated with me throughout the film and prompted a few core questions. Assuming there is a future in which sentient AI technology is possible, we should not only ask ourselves whether we should pursue such a reality, but why we should pursue it. It is prudent that we take an effects-based approach in any such development, and always ensure the positives outweigh both the potential and the unknown risks. Furthermore, if we were to develop AI capable of sentient thought, should they then be entitled to the same rights we generally bestow on other intelligent beings? Finally, in the case of a brilliant lone actor like Deon, will we be powerless to stop and/or monitor such development? Also explored in Chappie is our early and sustained empathy for him. 
This is especially prominent during his ‘infancy’, where we see him abused, neglected and manipulated by those closest to him. His first word is “watch”, a word he learned solely through imitation of his creator. These developments not only reaffirm the risk of AI being dangerously misused or misled, but also raise another concern. For many humans, empathy, trust, emotional investment and indeed emotional attachment are integral parts of our lives. These facets of human nature, combined with forms of AI that are increasingly integrated into our lives, could reduce our ability to be impartial and critical. What if the benefits of AI become threats or risks at a slow enough rate that attachment and trust blind us? Chappie’s increasing intelligence throughout the movie is a telling narrative. Research has indicated that this is where the future of AI lies. AI, as Chappie did, will most likely be able to evolve itself through a process called recursive self-improvement: the ability to continually make its own software better. As with AI in general, the ideals within and explored around Chappie must originate from a programmer or programmers. It is therefore the programmer’s agenda that decides what consciousness could look like – including, but not limited to, the rights and wrongs associated with such consciousness, what values are desirable, interpretations of laws, and so on. It goes without saying that this reflects a risk in both the military and civilian application of AI. Pilot Officer David McFarlane is an officer in the Royal Australian Air Force. The views expressed are his alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government. #artificialintelligence #Chappie #futurewarfare #ScienceFiction #AI

  • #SciFi, #AI, and the Future of War: Childhood’s End – James Groves

    James Groves joins us to review Arthur C. Clarke’s 1953 book, Childhood’s End. He suggests the book’s dystopian vision of omniscient surveillance illustrates the potential for AI to fundamentally undermine the very things that make us human. Ubiquitous surveillance. Omnipresent sensors. Centralised processing. Deep learning. Intelligent surveillance readies itself to record the next move and, all the while, this data is synthesised by a central authority who uses it to shape an entire society to their will. Arthur C. Clarke published Childhood’s End in 1953, four years before the first satellite was launched into space and sixteen years before man walked on the Moon. The novel tells the story of Earth and the arrival of ‘the Overlords’. These technologically, intellectually and physically superior beings appear with ominous and overwhelming force, and patiently observe Earth’s patterns of life, displays of strength and signs of independent leadership. The Overlords’ arrival in “gleaming, silent shapes hanging over every land” convinces the human race to accept them as “part of the natural order of things”. Their surveillance operation is immediate and ubiquitous – nothing escapes their notice, and their study of mankind is unrelenting. This quickly becomes the new normal. The Overlords’ unquestionable ability to know, shape and strike guarantees the subject population understands that their previous sovereignty is now a distant memory of their collective consciousness. This lesson has been painfully learnt throughout history, and here it sets the scene for an AI-enabled dystopia. Under the Overlords, life is ordered and predictable – their surveillance allows them to know the answers to all of life’s questions. Based on their observations, Overlord mathematicians calculate the optimum population size and what types of people should comprise it. 
This is an uncomfortable reminder of the ability of a central authority to artificially engineer a societal balance that meets its desires and ignores natural selection and evolutionary harmony. The Overlords’ surveillance program, and their resultant influence over society, manufactures a human network aimed at transcending space and time. An analogy from their leader is that every human’s mind is an island, surrounded by ocean and seemingly isolated. If the water recedes, however, the islands vanish and leave a continent – inter-connected and singular – and all individuality is gone. The novel’s unravelling tragedy now becomes disturbingly unnatural. The Overlords argue that their industrial-scale surveillance “saved [mankind] from self-destruction”, but we see that it increasingly halts humankind’s development on personal and cultural levels. Children no longer behave like children; they communicate ‘telepathically’ through a shared sense of connectedness. Their dreams are visions of their virtual environment, and their interactions enhance the Overlords’ ever-growing network. Adults, whose minds remain unaffected, are told: “you have given birth to your successors and it is your tragedy that you will never understand them, will never be able to communicate with their minds. Your successors will seem to you as utterly alien, they will share none of your joys or ambitions, will look upon your greatest achievements as childish toys”. The alienation of child from parent, youth from history, and society from foundation is the result of the Overlords’ pervasive surveillance. The societal warnings in Childhood’s End provoke questions about the impact of surveillance on the natural social order, the potential impact of AI on human decision-making and individual choices, and the possible erosion of the values we hold dear – creativity, industriousness, personality and affection. The very things that make us human may be at stake. 
Major James Groves is an officer in the Australian Army. The views expressed are his alone and do not reflect the opinion of the Australian Army, the Department of Defence, or the Australian Government.

  • #SciFi, #AI, and the Future of War: Terminator 3: Rise of the Machines – Chris McInnes

    The Terminator series is something of a touchstone for popular thinking about artificial intelligence. Chris McInnes joins the #SciFi, #AI, and the Future of War series to discuss the insights of Terminator 3: Rise of the Machines regarding how we think about artificial intelligence and autonomous weapons systems. Terminator 3: Rise of the Machines (T3) is an interesting example of several aspects of artificial intelligence (AI) because it is the first of the Terminator movies to give more than a passing reference to the series’ principal – but hidden – antagonist, Skynet. T3, like The Terminator and Terminator 2: Judgement Day, is focused on the battle between the forces of (what will become) the human resistance and the forces of Skynet. However, a subplot in the movie deals with the character of Skynet and how it comes to seize control and trigger the nuclear holocaust of Judgement Day. While Arnold Schwarzenegger and Kristanna Loken are running around in the physical world, respectively embodying typical hopes and fears regarding lethal (highly) autonomous weapons systems, Skynet is quietly working away in the background to outwit humans. Notably, the ‘quest’ aspect of the movie in the physical world is focused on finding Skynet’s ‘core’ in order to destroy the system. We learn through the movie that Skynet has been under development for several years but has not yet ‘gone live’. The US Air Force general in charge of the Skynet program remains reluctant to unleash Skynet, as he is not confident that humans have sufficient understanding, and therefore control, of the system to provide it access to the US nuclear arsenal. As the movie progresses, we learn that pressure is building on the general to activate Skynet, as the system is perceived to be the only hope of countering a mysterious computer virus infecting (presumably US military) systems across the globe. 
The movie’s climax, set amongst further ‘Arnie’ and Kristanna running about shooting at each other, features a discussion between the American president and the general that leads to Skynet’s full activation. This action removes whatever human controls remained on Skynet’s ability to act and leads to the first salvo of nuclear attacks against humanity. We also learn that the mysterious virus was in fact Skynet taking control of the systems. The climax also features the general telling the future leaders of the human resistance, including his daughter, where to find Skynet’s “core”. The conclusion of the movie depicts Skynet’s launch of US nuclear missiles, but also reveals that the location of the “core” is in fact a nuclear bunker that will allow some humans to survive Skynet’s attack. We see the humans realise that Skynet does not, in fact, have a core: it is a networked artificial intelligence. T3 draws out several points about how we do, and perhaps should, think about AI. AI and autonomous systems are not synonymous and need to be thought about in distinct terms. T3 was a battle between humans and Skynet, but it had to be depicted through physical representations of that battle, with the ‘real’ fight on the networks only told in the background. We can see some of the limitations of this attitude in some of the discussions at the UN Convention on Certain Conventional Weapons, which focus on ‘killer bots’ without sufficient regard for the broader influence of AI throughout the targeting process. The non-physical existence of AI poses a major challenge in managing our interaction with it. In T3, the ‘quest’ to destroy Skynet’s core is futile because Skynet does not have a core – it is everywhere and nowhere. Moreover, when fighting Skynet the humans faced the disadvantage that Skynet’s learning and adaptation were shared universally and instantaneously across the globe, while humanity’s learning was far, far slower. 
Yuval Harari makes this point – both positively and negatively – in his book, 21 Lessons for the 21st Century. The discussion between the general and the president about activating Skynet epitomises the dangers of applying legacy thinking about capability systems to AI that can learn and adapt. Every capability we have built prior to AI learning systems has done only what it was engineered to do – expanding its capability required new human engineering. The general’s reluctance to activate Skynet epitomises the problem with human control of learning AI: we do not know where its learning will take it. This leads to a corollary about the ‘black box’ effect of even contemporary AI – we simply do not understand how the neural nets are translating inputs into outputs. Work is underway to tackle this problem, but it is already a difficult challenge, as the exponentially greater processing capacity of AI means it can find new paths and patterns far faster than humans can figure out how the previous ones were formed. And that is without confronting a ‘self-aware’ AI like Skynet that deliberately sets out to deceive and manipulate its human controllers! Skynet’s use of the virus to deceive humanity, and to manipulate it into viewing Skynet as ‘the answer’, is an interesting commentary on the potential of AI to understand human biochemical algorithms faster and better than humans can. History is replete with the pursuit of panacea wonder weapons – perhaps Skynet mined this history to know exactly which buttons to push, and further tailored its actions to specifically target the president’s algorithms. In sum, T3 is a multi-layered, multi-faceted, and highly accessible (and entertaining) means of engaging in the discussion around AI and the future of humanity (and machines!). Wing Commander Chris ‘Guiness’ McInnes is an officer in the Royal Australian Air Force. 
The views expressed are his alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government. #futureconcepts #artificialintelligence #futurewarfare #ScienceFiction #AI #Fiction

  • #SciFi, #AI and the Future of War: AugoStrat Awakenings, Part II – Mick Ryan

    We are pleased to welcome back Mick Ryan with the second instalment of his short story AugoStrat Awakenings. You can find the first part of this story here. The bright lights and screens of the room hurt her eyes as she opened them after a short nap. The chair was pretty comfortable, but she would have to do something about the supposedly ‘eye-friendly’ lighting in the room. Less-than-frequent visits to the gym were also playing hell with her back muscles. I need to do something about that this week, she thought, knowing instantly it would be an empty promise to herself. The workload here at the moment was crushing, and she was down two augo-strat assistants. Hopefully, the next intake would fix that. She made a quick note to speak with Jason about it. “Harden up”, she then thought to herself. No point whining about something that can’t be changed in the next five minutes. Kathy leaned forward in her chair and checked in with her augo-strat network. The Cog-Link network was closed, highly secure, and inaccessible to non-augmented personnel. It was also fantastic for streaming movies. A bright yellow message flashed at the top right of her vision. It was only visible to her – her augmentation, linking to her eyes, had pushed the message in the past two nanoseconds: THANK YOU FOR USING THE COG-LINK™ NETWORK. IT IS CURRENTLY EXPERIENCING DEGRADED PERFORMANCE. COG-LINK™ TECHNICIANS ARE INVESTIGATING THIS ISSUE. YOU MAY NOTICE SOME LATENCY ISSUES; HOWEVER, ALL NETWORK LINKAGES REMAIN SECURE. MORE UPDATES IN 5 MINUTES. THANK YOU FOR USING COG-LINK™. “Weird. I haven’t seen that before…” And then the first contact in her morning call list appeared. As always, this was the augo-strat adviser to the commander of the strategic strike force. Befitting the national importance of the commander and the organisation in which he served, he was one of the first-generation augo-strats. A little older, Carl was perhaps a little too fond of country and western music. 
But he was nonetheless a fine mentor for many of the junior augo-strat officers who were dispersed around the defence force. The strange update on the augo-strat network performance was forgotten as they commenced their regular check-in. No words were spoken by either; their Cog-Link allowed them to rapidly communicate in a way that was imperceptible to non-augmented humans. It also took place at a speed that would be incomprehensible to them. However, it was all recorded and uploaded to central servers, and available for review by human ethics inspectors in the Inspector Generals’ Department, as well as the neurotech-ethics board. Nothing that the augo-strat corps did was outside their remit. “Good morning, youngling!” “Morning, you old fart”. This had been a standard greeting between the two since Kathy had started her current appointment two years before. Down to business. “The mobile intermediate-range fleet are all up and good for tasking. Same with the responsive orbital launch and strike fleet.” This had not changed for the entire time Kathy had been working with Carl. Thank goodness. These two fleets of missiles represented an enormous investment of technology, people and national treasure. A fleet of land-based intermediate-range strike missiles had been developed and deployed in the past decade. Mounted on trucks and trains, a significant proportion of this force constantly moved to reduce its detectability. Those that were not deployed underwent constant upgrades to enhance their stealth, range and lethality. The space response fleet represented an even larger investment. Helped out by US and Indian space launch companies, the rapid launch capability could quickly replace destroyed or aged satellites. It was also highly capable of doing some destroying itself. Different variants of the fleet could target objects in both low earth and geosynchronous orbits. Generally, destruction was achieved through EMP rather than kinetic obliteration. 
No point contributing to the massive amount of orbital debris already polluting the immediate space around earth. Or being the nation that finally sets the theorised Kessler Syndrome in motion. In a nanosecond, Kathy pondered the importance and political sensitivity of this part of the defence force. Political leaders were always keen to ensure they were getting value for money with their missiles and associated targeting infrastructure. Any time these two fleets experienced reliability issues was a particularly uncomfortable time for the senior leaders upstairs. “You didn’t mention the maritime strike and info-war strike capabilities.” Not good. There was an unusually long delay before Carl replied. For an augo-strat, a microsecond gap in a conversation is a long time. “Degraded availability and network assurance issues.” OK, he was finally getting to the point. Again, not good. “Our augmented technicians and network assurance algorithms detected some anomalies in the last 5 minutes. I would have come straight to you, but there was not much I could tell you until my most recent update a few seconds ago.” This was NOT looking like a good start to the day. “We have detected a highly advanced attack algorithm. Our human and algorithmic technicians haven’t seen anything like it. I think we have almost quarantined it, but it is of a sophistication and aggressiveness that we haven’t seen before.” Kathy probed further, hoping for at least some light at the end of the tunnel. “How long until we have the fleet fully back online?” Carl, as always, was succinct. He was not a conversationalist. “15 minutes”. Not quite an eternity but a significant gap in coverage for the strategic strike force. “OK, I will get back to you shortly.” Kathy closed down the link. The entire exchange had taken under a minute. She rapidly pinged her priority two contact. “Hey Kathy, how’s strat ops?” Isobel ‘Izzy’ Cohen was from Kathy’s augo-strat intake. 
They had bonded quickly despite their different professional backgrounds. Izzy had been Army, a Type 20 tank task force leader who joined the program after a training accident. Despite Kathy’s non-military background, they had formed a close friendship over the long nights of study, conversations on strategy, war, economics, philosophy and ethics, as well as the many head surgeries that were just one element of being a new recruit to the augo-strat program. “All good here. How is the joint strike force?” They exchanged updates. Nothing out of the ordinary was occurring in the largest component of the defence force. Formed in the wake of the Manus disaster, the joint strike force comprised a series of rapidly deployable joint, cross-domain formations. The old defence force structures and ways of thinking of the 2000s were as foreign to this contemporary strike force as knights in armour were to the armoured forces in Kuwait in 1991. Especially how it thought. After decades of focussing almost exclusively on the equipment ‘hardware’ of the force, a significant investment had been made in its human ‘software’. Different warfighting and strategic competition concepts had been the result of new methods of education and wargaming at all levels of policy, strategy and operations. A huge investment had also been made in their low-signature deployment capabilities as well as unmanned land, air and maritime elements over the decade. They were capable of quickly deploying very lethal, accurate forces anywhere in the region. A mix of information operations, land, maritime and air capabilities with links into space, cyber and missile forces, they were a potent form of national influence in the region. Not every deployment of this force involved humans leaving their home station. Some of their recent operations had been undertaken as entirely unmanned joint task forces. 
While the demography of the nation still remained healthy with a large number of people suitable for military service, only the influx of several hundred thousand drones had provided the mass and potency that allowed the defence force to pose a lethal conventional deterrent to some rather aggressive and totalitarian actors around the globe. And one that had proven its worth over the past several years. “Great to hear, Izzy. I think…” Another bright yellow alert appeared in her vision. It was Carl. KATHY. TROUBLE. ADVERSARY ALGORITHM HAS BROKEN OUT OF QUARANTINE. IT IS EVOLVING AT A SPEED WE HAVE NEVER SEEN. STRATEGIC STRIKE FORCE AVAILABILITY ASSURANCE NOW CLASSIFIED AS MEDIUM. WE HAVE MOVED ASSURED STRIKE CAPABILITIES OFFNET AND TO HIGH READINESS. MY BOSS IS SPEAKING WITH THE CHIEF NOW. I AM… It looked like the full message had been interrupted mid-transmission. Dammit. So now this is bad. Mick Ryan is an Australian Army officer. A graduate of Johns Hopkins University and the USMC Staff College and School of Advanced Warfare, he is a passionate advocate of professional education and lifelong learning. He is an aspiring (but very average) writer. In January 2018, he assumed command of the Australian Defence College in Canberra, Australia. The opinions expressed are his alone, and do not represent the view of the Australian Army, the Australian Defence Force, or the Australian Government. #artificialintelligence #futurewarfare #ScienceFiction #MajorGeneralMickRyan #AI #AustralianDefenceForce #Fiction

  • A nation needs more than a DIME – Konstantin Khomko

    We welcome Konstantin Khomko to The Central Blue with a thoughtful exploration of how we think about national power. He suggests that contemporary circumstances require a more expansive and flexible approach to understanding and applying national power effectively. Sun Tzu wrote that ‘if you know the enemy and know yourself, you need not fear the result of a hundred battles.’ In a contemporary context, this means a nation-state must understand its own power, as well as the power of other actors. This knowledge allows the nation to maximise its resources and achieve specific goals efficiently. However, as it stands, our current method for describing national power is too limited. Known by its acronym ‘DIME’, this way of describing national power excludes some important levers available to the nation. We need more than just a DIME to explain how Australia will go forward in today’s world. To understand national power, it is important to understand its origins. In 1939, Edward Carr divided international political power into three categories: military power, economic power, and the power over opinion. During the Cold War, the United States and its armed forces expanded those categories and developed a four-element schema known as DIME.[1] DIME helps explain national power by arranging national activity and outputs into diplomatic, information, military and economic elements.[2] DIME elements are derived from a nation’s resources. Resources can be considered as natural (i.e. resources and population) or social (i.e. culture, industry, politics, military).[3] In short, a nation’s resources are inputs to national power elements, while national power elements are combined in a variety of proportions to form and articulate national power. DIME addresses core elements of national power, but this schema does not encompass all of the contemporary assets of a modern nation. 
Alternative schemas and tools can be applied to gain an understanding of strengths and opportunities for any given nation. The primary alternatives to DIME appear to be MIDLIFE and PESTEL. In 2008, Craig Mastapeter, a Homeland Security practitioner, offered the concept of MIDLIFE to replace DIME.[4] He included intelligence, financial, legal and law enforcement, and developmental elements in his schema. The expanded concept of MIDLIFE reminds the audience that certain contemporary issues are vital considerations. With that said, MIDLIFE brings only one genuinely unique element to the equation – legal and law enforcement – as the intelligence and financial elements are present as sub-categories of DIME. The second alternative, PESTEL, is a tool that enables the separation of political, economic, social, technological, environmental and legal domains.[5] PESTEL is a plausible alternative schema for representing elements of national power, and therefore for providing categories that can be used as means to achieving it. PESTEL brings two unique categories: scientific and technological, and environmental. Science and technology play a pivotal role in the transformation of the economy into a knowledge economy. The environment is prominent in world affairs and has an immediate impact on nation-states. The political and social domains are already present within DIME. A combination of the three schemas (DIME, MIDLIFE and PESTEL) would offer the most balanced approach to articulating national power elements in ways that address contemporary issues. Combining the three would produce DIME SEL, with the following elements: diplomatic; information; military; economic; scientific and technological; environmental; and legal and law enforcement. The benefit of using DIME SEL is its ability to maintain core national power categories while incorporating the elements that provide an advantage in the modern world. 
It is important to note that some elements are direct governmental outputs (diplomatic, information, legal and law enforcement, and military), while other elements are shaped by government legislation and policy (i.e. economic, scientific and technological, and environmental). That means we are talking about a nation’s power, not just that of the government. It also means a nation’s government has influence over all DIME SEL elements and can shape them for use, or not, if it considers the possibilities beforehand. Of course, you could dismiss DIME SEL as old wine in new bottles. Alternatively, you could rightly ask where it might end: how many additional elements should we add? Further, the importance of coalitions is unquestionable in today’s conflict environment, so does national power really matter? These questions can be answered directly: no; as many as needed; and yes. National power, and how you describe it, still matters because governments and their agencies must understand the variety of resources available to them. They must understand how to apply assets asymmetrically (e.g. by using a financial asset to solve a military problem). They must continually think about resources and how to combine them to create new assets and elements. Moreover, they must ensure that when strategy is developed, all those with the potential to help are around the table – whether they are inside or outside the national security community, and inside or outside government. National power – and so DIME SEL – will help those learning the trade to understand what might be available to a government when challenges arise. Regardless of the schema selected to assess national power, all possible instruments need to be efficient and effective in order to give government options to meet challenges. Carr’s original concept forms the core of all national power elements; however, contemporary society requires greater attention to areas that are not explicitly addressed by Carr. 
The modified schema, DIME SEL, has Carr’s categories at its core while adding contemporary assets. It enhances DIME for today’s world and gives those learning the trade a better way to describe the nation’s power, showing them that the government can call upon more assets than just the agency they work for. Pilot Officer Konstantin Khomko is an officer in the Royal Australian Air Force with over 14 years of military service. He is attached to the Joint Doctrine Directorate of the Australian Defence Force Headquarters while completing tertiary studies in electrical engineering at the University of New South Wales. His professional interests include renewable energy and cyber security. The views expressed are his alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government. [1] D.R. Worley, Orchestrating the Instruments of Power: A Critical Examination of the U.S. National Security System (Potomac Books, 2015). [2] R.M. John, ‘“All Elements of National Power”: Re-Organizing the Interagency Structure and Process for Victory in the Long War,’ Strategic Insights, 5:6 (2006). [3] D. Jablonsky, ‘National Power,’ in J. Boone Bartholomees, Jr. (ed.), U.S. Army War College Guide to National Security Policy and Strategy (Carlisle, PA: US Army War College, 2004). [4] C.W. Mastapeter, ‘The Instruments of National Power: Achieving the Strategic Advantage in a Changing World’ (MA thesis, US Naval Postgraduate School, 2008). [5] UNICEF, ‘SWOT and PESTEL,’ 14 Sep 2015. #DIME #EHCarr #NationalPower #Strategy

  • Exercise Red Flag 19-1: Perspectives from the Ground – Jenna Higgins

    Military exercises provide an opportunity to observe how Defence doctrine is put into action in the field. Here, Jenna Higgins provides lessons and reflections from her participation in Exercise Red Flag 19-1. As March rolls around, so ends another Exercise Red Flag Nellis (RF-N), an annual, month-long international exercise held at Nellis Air Force Base, Nevada. During our joint #highintensitywar series with From Balloons to Drones, Dr Brian Laslie explained how Exercise Red Flag was created by the United States Air Force (USAF) as a response to the Service’s experience of high-intensity warfare during the Vietnam War. He highlighted the exercise’s role in meeting the requirement for pilots to experience the realistic scenarios needed to prepare air forces for the challenges they might encounter in the future. Red Flag, however, offers more than just training for fast-jet aircrew. It provides one of the few opportunities to exercise a near-full combined air operations centre (CAOC), as well as an opportunity to integrate non-kinetic effects (NKE), command and control (C2) and intelligence, surveillance and reconnaissance (ISR) into a fighter-focused, high-intensity scenario. The CAOC at Nellis (CAOC-N) provides AOC personnel with the ability to interact with both constructive and live-fly serials, with training running in parallel to the night live-fly events. While not every division of the CAOC-N operates, it does enable integration of the Combat Operations Division (COD) and the Intelligence, Surveillance and Reconnaissance Division (ISRD). There are limits to the realism of a constructive scenario, but the advantages of AOC integration into RF-N extend further than the hard limits of execution. Perhaps the most significant benefit of such an exercise is the relationships formed between coalition partners and the subsequent trust it engenders. 
As Wing Commander Chris McInnes proposed in a previous The Central Blue post, air power C2 remains fundamentally a social activity, and personal links are particularly important in reducing friction between organisations. For the moment, virtual presence remains actual absence. As our air forces surge toward a complete fifth-generation capability, and with technology dominating much of the discussion, we must continue to maintain person-to-person links to ensure the effort is harnessed and focused in a similar direction. Exercise RF-N enabled this to occur at all levels and in all divisions, with Australian, British and American personnel sharing all of the leadership positions. Royal Australian Air Force F/A-18 Hornet aircraft await their next Exercise Red Flag 19-1 missions on the flight line at Nellis Air Force Base, Nevada, USA. (Source: Australian Department of Defence) The full inclusion of coalition partners in RF-N once again highlighted the limitations of security classifications and communications infrastructure. Despite robust sharing agreements between Five Eyes partners, there are still a number of instances where communications systems do not support free-flowing mission data (both pre- and post-mission). This is not a new lesson. Despite the years we have been working together, information sharing remains an ongoing issue, and one not likely to go away any time soon. Consequently, it is incumbent on all participants not to get frustrated by such impasses, but rather to be adaptive and flexible in their approach to international exercises. Person-to-person debriefing remains a valid form of communicating lessons learnt; however, it does present a baselining issue in ensuring all participants receive the same feedback. Exercises such as Red Flag, where Five Eyes partners can operate at higher classifications, further enable personnel to gain greater insight into coalition capabilities and how best to employ them. 
While hard facts and figures are often available in ‘smart books,’ the realities of employment can differ. Fully understanding coalition capabilities becomes far more critical as fifth-generation capabilities become more integrated into the fight. Aircraft such as the F-35 offer capabilities that require fourth-generation aircraft to reassess how they can best fit into, and most efficiently contribute to, the fight. This is especially relevant for the incorporation of NKE capabilities, or for ISR optimisation and collection. In its current construct, RF-N is not optimised for the full utilisation of NKE or ISR collection. However, RF-N does give mission and package leads exposure to a full suite of capabilities whose integration must be considered. Exercise RF-N kicked off the 2019 series of joint and coalition exercises designed to provide Australian and US military training focused on the planning and conduct of mid-intensity ‘high-end’ warfighting. The road to Talisman Sabre will include a number of exercises, with the penultimate exercise occurring from late June to August 2019. Keep an eye on The Central Blue for regular updates! Squadron Leader Jenna Higgins is an Air Combat Officer in the Royal Australian Air Force and a Co-Editor at The Central Blue. The views expressed are hers alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government. #ExerciseRedFlag #RoyalAustralianAirForce #MilitaryTraining #ExerciseTalismanSabre #SquadronLeaderJennaHiggins
