
The Subprime Strategy Crisis: Failed Strategic Assessment in Afghanistan


In June 2006, then-congressman Marty Meehan opened a hearing about the role of U.S. Special Operations Command before a subcommittee of the House Armed Services Committee by stating that he had “grown increasingly pessimistic about our overall philosophy [in the post-9/11 wars].” He continued, “I am faced with the prospect that we might not be applying military resources in the most prudent and effective manner. … Have we failed to accurately interpret the nature of this conflict?”

The rapid collapse of the Afghan government and security forces 15 years later is stark evidence that Meehan’s observations were prescient. Yet his remarks raise a larger question: why would the United States continue to expend military resources in a wasteful manner for 20 years if so many knew that the strategy in Afghanistan was ineffective? One answer lies in Meehan’s behavior rather than in his words. Despite his sense that the military’s approach in Afghanistan and Iraq was failing, he still voted to pass both the 2007 appropriations and authorization acts, giving the military the resources and authorities it said it needed to successfully prosecute both wars. The continued pursuit of ineffective strategy in Afghanistan for two decades was not the result of singular decisions by presidents or generals. It was the result of thousands of leaders — military and civilian — behaving like Rep. Meehan.

There are two key elements to understanding how the military’s continued implementation of a sub-optimal strategy in Afghanistan resulted from thousands of individuals rather than the decisions of a few key leaders. First, the nature of irregular warfare makes it extremely difficult to assess the effectiveness of a strategy, leading the military to rely on misleading tactical and operational data to measure strategic progress. Without obvious strategic metrics, neither civilian nor military leaders had a clear understanding of strategic progress. Second, with the military offering positive assessments that only a few individuals had the requisite knowledge to challenge, there were few incentives for civilians to stop rewarding the military, which reinforced its existing approach. For the military, Congress’ tacit approval and the distribution of individual and organizational rewards created perverse incentives for officers at all levels to misrepresent information. This mutually beneficial process became self-reinforcing for both military and civilian leadership, making it extremely difficult to change strategy or end the war entirely. The behavior of Special Operations Command in the two decades after 9/11 illustrates this pattern.

Assessing Special Operations in Afghanistan

U.S. Special Operations Command was not in charge of the overall effort in Afghanistan, but it was one of the military organizations most involved in the war. The command’s own statements emphasize its central role, claiming that it “took the lead for DoD [Department of Defense] in defeating the Taliban in Afghanistan.” Special Operations Command was placed in charge of synchronizing and coordinating global counter-terrorism efforts in 2005 and became even more central to the effort in Afghanistan after President Barack Obama declared an end to combat operations in 2014. Special operators were not only some of the last soldiers to leave Afghanistan, but also the first to arrive in October 2001. At that time, the command was only 14 years old and had already survived strong opposition from the military services, making its leadership particularly sensitive to increasing its resources, prestige, and influence as a means of ensuring organizational survival. Special Operations Command did not intentionally prioritize organizational rewards over strategic progress, but the positive feedback it received from tactical successes kept its leaders from asking the right questions to properly assess its strategy.

Special Operations Command, consistent with most major military organizations, has a directorate responsible for strategic assessment and a separate directorate for strategy, plans, and policy. Interviews I conducted with senior officers who formerly served in these directorates revealed the disconnect between strategy and assessments. The officers told me that no one was truly responsible for assessing how well the command’s strategy worked historically and incorporating those findings into future planning. There was limited interaction between the two directorates, but the bigger problem was that leaders viewed themselves as independent commanders at a fixed point in time, mostly unconcerned with what happened before they were in command. The directorate responsible for strategic assessments was primarily focused on current operations and future resource requirements rather than understanding the effectiveness of the strategy up to that point. According to one officer I interviewed, “In Afghanistan, assessments were used to help the commander tell his story [to higher headquarters and Congress], not to help inform the commander.” Even though the existence of Special Operations Command was no longer in question after 9/11, its legacy of having to fight for resources and relevance led the command to use assessments as a tool to demonstrate its value rather than shape its strategy.

The data that special operations units in Afghanistan collected were helpful for painting a picture of what those units did (often referred to as “measures of performance”), but they did little to inform either the commander or external actors about the strategic effects of their actions. Among the most frequent measures were the number of operations special operations units conducted, the number of partner forces they trained, and the number of enemy fighters killed or captured. For example, the command touted its 1,000 air assault operations in 2006, the fact that it had killed or captured 600 Taliban leaders and an additional 2,000 fighters in 2010, and its record of successfully training nearly 11,000 Afghan Local Police in 2012. Though Special Forces were very effective at teaching their counterparts how to fight, the tactical training was wasted without a more cohesive and sustained effort across the military and civilian agencies to build local, rather than national, defense institutions. The metrics Special Operations Command reported may have indicated short-term, tactical success, but the number of operations it conducted and the number of fighters it killed had little bearing on strategic success in Afghanistan.

In Afghanistan, useful, irrelevant, and subjective measurements were all packaged together in an attempt to create a broad understanding of strategic progress. While there were potentially more important metrics military leaders could have tracked and reported, such as the defection rate of insurgents, the real problem was the flawed assumption that tactical and operational results aggregate into strategic progress in a counter-insurgency. The dynamics of strategic assessments in Afghanistan had much in common with the subprime mortgage crisis of 2007 to 2010. That crisis was the result of high-risk mortgages being packaged with lower-risk loans and sold as mortgage-backed securities. Most of these securities were rated as “investment grade” by ratings agencies that were generously compensated and incentivized to rate securities higher than their makeups warranted. Loans were often repackaged so many times that few people knew what was in them — they only knew the overall rating of the security. When borrowers began to default, the individuals and banks that owned the securities lost billions, exposing the fragility and interconnectedness of the entire system.

Similarly, the dynamics in Afghanistan had become so complex that few individuals understood them. But when general and flag officers delivered their assessments to their civilian overseers, those assessments nonetheless carried an “investment grade” stamp of approval from officers who were responsible for thousands of lives and in charge of an institution that commanded more respect than the institutions overseeing it. The few organizations that challenged the military’s reports of progress, like the Special Inspector General for Afghanistan Reconstruction, did not have the same visibility or public support as the military, meaning there were no incentives for elected officials to value their conclusions over those of military leaders. When early defeats of Afghan security forces resulted in a cascade of surrenders to the Taliban, the military’s progress reports were revealed to be as much of a façade as the securities ratings. Though misrepresentative data may have been fed into the system, the military’s poor strategic assessments were not the result of leaders engaging in a grand conspiracy to deceive the American public. They resulted from the inherent difficulties of understanding success in irregular warfare, and they did not improve as the war progressed because civilians continued to reward the military for its strategy and assessment practices.

Positive Feedback Loops: Organizational Rewards and Strategic Maintenance 

Since Congress legislated Special Operations Command into existence in 1987, the two bodies have enjoyed a close relationship. In addition to providing annual statements to the defense and appropriations committees, the command has a robust legislative affairs section and regularly briefs subcommittees on its activities. In an interview, a former professional staff member on the House Armed Services Committee said that members flocked like “moths to a flame” to classified briefings with Special Operations Command, but they “would just go and watch sexy direct action raid videos and call it oversight, while only about ten percent of members would ask tough or relevant questions” of the command. Another staff member added that “kill/capture missions are much more salient for Congress” because they are easy to understand, achieve immediate results, and are easy to quantify. When these tactical successes fit with a broader story about strategic progress, members of Congress have little ability or incentive to challenge the military’s assessment of the war — even if they otherwise feel that the strategy is failing. As a result, most members behave as Rep. Meehan did and continue to grant the military what it requests.

The rewards Congress has granted to Special Operations Command are relatively easy to quantify. Total military personnel in Special Operations Command roughly doubled from 2001 to 2020, while the total military force shrank by 1.5 percent over the same period. Special operations officers were also rewarded with more promotions and career opportunities. My analysis shows that, between the Goldwater-Nichols reforms in 1986 and 9/11, only one of 44 (2 percent) service chiefs, geographic combatant commanders, and chairmen of the Joint Chiefs of Staff had a special operations background (Henry Shelton). From 9/11 through 2020, that number jumped to eight of 46 (17 percent). The numbers were even more dramatic for the Army’s leadership, where seven of 17 (41 percent) Army chiefs of staff or geographic combatant commanders serving after 9/11 had a special operations background (Northern Command is excluded because it did not exist prior to 9/11). Importantly, each of the leaders with a special operations background came from a unit that specializes in direct action, like the Rangers or SEALs, rather than from a unit like the Special Forces that takes a long-term approach to addressing drivers of instability and building local militaries. These promotions further reinforced the view that senior defense officials and Congress preferred short-term, tactical success that could be easily measured.

The command’s budget grew from 1.2 percent of total defense spending in 2001 to 1.9 percent in 2020, but the growth is far greater once items like service support to special operations are accounted for. In 2013, Adm. William McRaven, then head of Special Operations Command, requested $143 million from Congress to meet a “critical need” by upgrading video systems on remotely piloted aircraft to high definition. McRaven characterized the upgrades as providing “game-changing, operational effects.” The funding request was approved with no requirements to demonstrate either the operational or strategic effects of the upgrade. To be clear, I am not arguing that there were no positive effects from this spending, nor am I arguing that Special Operations Command has not had important successes in Afghanistan and in dozens of other countries around the world. But, when it comes to irregular wars like the counter-insurgency in Afghanistan, civilian and military leaders have allowed tactical and operational results to supplant strategic success.

This set of tangible and visible rewards created not only incentives for Special Operations Command to maintain its course, but also pressure down the chain of command to continue emphasizing the types of operations to which civilians paid closest attention. Although special operations units had a wide range of missions and responsibilities in Afghanistan — including combatting Taliban propaganda, identifying the most critical gaps in local governance, and building local security units — most special operations forces were expected to prioritize direct action missions. A former Special Forces officer who served in Afghanistan confessed in an interview, “If you didn’t put something sexy like body counts on your briefing slides, your career was at risk.”

Assessment in Irregular Warfare Beyond Afghanistan

As with the Vietnam War, one narrative emerging from the war in Afghanistan is that America’s strategic failure was the result of two decades of lies and cover-ups from the Pentagon and White House. There is no doubt that senior civilian and military leaders misled the public at times about the war effort, but the larger problem was the inability to understand the type of war the United States found itself in and to assess progress within it. An Army colonel who served in the early stages of the war said that the military “collected all sorts of statistics, [but] it was hard to know what conclusions to draw.” Because Congress used those flawed conclusions as a basis for rewarding some military organizations, positive reinforcement incentivized those units to stay the course rather than seek drastic change. The continued pursuit of a flawed strategy in Afghanistan therefore had as much to do with problematic patterns of civil-military relations as it did with military decision-making.

Irregular warfare will continue to play a central role as the United States and its strategic competitors seek to advance their political agendas in ways that avoid direct military confrontation. If the military hopes to make strategic contributions to America’s competition with states like Russia and China, it should start by improving its ability to conduct strategic assessment in irregular warfare. Although external agencies, such as the Special Inspector General for Afghanistan Reconstruction, play an important role, there are limits to their impact. As long as military engagements remain relatively low cost and far from the United States, watchdog reports may not generate enough public pressure to force the White House or military to change their strategy.

The military itself should improve its ability to create, implement, and assess strategy in irregular warfare. Entrepreneurial projects like the Irregular Warfare Initiative are impactful, as is the new congressional directive to establish a center for security studies in irregular warfare, but professional military education should also retain the lessons of Afghanistan and improve instruction in irregular warfare strategy. Further, the understanding of the military as an apolitical institution subject to Samuel Huntington’s notion of objective civilian control should be abandoned in favor of a more nuanced and accurate understanding of civil-military relations. Military education should use the work of scholars like Morris Janowitz and Risa Brooks as guides to show how the military influences policy and potentially undermines its own strategic effectiveness. Apart from improved education, military commands should reconsider how they are organized to conduct planning and assessment. The two functions could be fully integrated into a single directorate with a responsibility to look backward rather than just forward. Strategic assessment could also be insulated from command influence by creating components of the inspector general or the Government Accountability Office to conduct independent strategic assessments for each combatant command. Until civilian and military leaders develop a more complete understanding of how their relationship affects strategic outcomes and not simply resources, the United States will continue to implement wasteful and sub-optimal strategies in irregular wars.

Cole Livieratos is an Army strategist and veteran of the war in Afghanistan. He holds a Ph.D. in international relations, is a non-resident fellow at the Modern War Institute, and is a term member at the Council on Foreign Relations. Follow him on Twitter @LiveCole1. 

The views expressed are those of the author and do not reflect the official policy or position of the U.S. Army, the Department of Defense, or any other branch or agency of the U.S. government.

Image: U.S. Army (Photo by Sgt. Jillian G. Hix)
