Office of Sponsored Programs

The Limits of AI and Big Data Technology



The DoD is placing a tremendous amount of faith in AI and Big Data approaches to complicated and wicked problems. However, data scientists recognize that the fundamental nature of a challenge determines whether AI and Big Data are likely to produce the desired effects. Relevant questions include:

• What assumptions currently pervade military culture about AI and Big Data that, from a social science perspective, are inaccurate and counterproductive?
• How do AI and Big Data applications differ depending on whether the challenge is fundamentally an engineering question or a sociopolitical one?
• What are the limitations of AI and Big Data techniques in irregular warfare as a sociopolitical exercise, and what is therefore their appropriate use as tools?
• What is the impact of limited data on training models and developing reliable tools, especially in sociopolitical applications?
• What is the impact on the reliability of training data when answers are needed quickly and without prior curation?
• What techniques could be employed to limit or prevent response bias unconsciously embedded in reporting, which could skew results once fed into AI and Big Data analytical models?
• What techniques should be employed to ensure that data feeding AI and Big Data algorithms avoid confirmation bias, whether from biased reporting shaped by prominent analytical frameworks (e.g., DIME: diplomatic, information, military, and economic; PMESII: political, military, economic, social, information, and infrastructure; ASCOPE: areas, structures, capabilities, organizations, people, and events) or from cultural emphasis on certain factors at the expense of other, possibly more important, ones?
• How might social science methodology be taught to ensure AI and Big Data algorithms are populated with reliable data?
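The concern about response bias embedded in reporting can be illustrated with a minimal, self-contained sketch. Everything here is a hypothetical assumption for illustration: the two event categories, the true 30% rate, the 3x over-reporting factor, and the sample sizes are invented, not drawn from any real dataset. The point is only that a model trained on what gets reported, rather than on what happens, learns the reporting process as much as the ground truth.

```python
import random

random.seed(0)

# Hypothetical ground truth: 30% of events in the environment are "hostile".
population = ["hostile"] * 30 + ["benign"] * 70

def estimate_hostile_rate(sample):
    """Naive frequency estimate, as a stand-in for a trained model's prior."""
    return sum(1 for event in sample if event == "hostile") / len(sample)

# Unbiased collection: every event is equally likely to be reported.
unbiased_reports = [random.choice(population) for _ in range(10_000)]

# Response bias: hostile events are (hypothetically) three times as likely
# to be written up, mimicking analysts who over-report salient categories.
weights = [3 if event == "hostile" else 1 for event in population]
biased_reports = random.choices(population, weights=weights, k=10_000)

print(f"true hostile rate:        0.30")
print(f"estimate, unbiased data:  {estimate_hostile_rate(unbiased_reports):.2f}")
print(f"estimate, biased data:    {estimate_hostile_rate(biased_reports):.2f}")
```

Under these assumptions the biased estimate lands near 0.56 rather than 0.30: the skew comes entirely from the reporting weights, not from the underlying events, which is precisely the kind of distortion the curation and debiasing questions above are asking how to detect and prevent.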