            In the United States Court of Federal Claims
                                           No. 19-405C

                                 (Filed Under Seal: July 23, 2019)

                                    (Reissued: July 31, 2019)

                                                 )
  RED CEDAR HARMONIA, LLC,                       )     Post-award bid protest; propriety of the
                                                 )     procuring agency’s technical evaluation;
                 Plaintiff,                      )     claim of unequal treatment
                                                 )
           v.                                    )
                                                 )
  UNITED STATES,                                 )
                                                 )
                 Defendant,                      )
                                                 )
           and                                   )
                                                 )
  NEXTGEN FEDERAL SYSTEMS,                       )
  LLC,                                           )
                                                 )
                  Defendant-Intervenor.          )
                                                 )

       W. Brad English, Maynard, Cooper & Gale, P.C., Huntsville, Alabama, for plaintiff.
With him were J. Andrew Watson, III, and Emily J. Chancey, Maynard, Cooper & Gale, P.C.,
Huntsville, Alabama.

        Sonia W. Murphy, Trial Attorney, Commercial Litigation Branch, Civil Division, United
States Department of Justice, Washington, D.C., for defendant. With her on the briefs were
Joseph H. Hunt, Assistant Attorney General, Civil Division, and Robert E. Kirschman, Jr.,
Director, and Steven J. Gillingham, Assistant Director, Commercial Litigation Branch, Civil
Division, United States Department of Justice, Washington, D.C., and Latonya M. McFadden,
Defense Information Systems Agency, Office of the General Counsel, Fort Meade, Maryland.

       C. Peter Dungan, Miles & Stockbridge P.C., Washington, D.C., for defendant-intervenor.
With him were Jason A. Blindauer, Alfred Wurglitz, and Christopher Denny, Miles &
Stockbridge P.C., Washington, D.C.

                                    OPINION AND ORDER1



       1
        Because of the protective order entered in this case, this opinion was initially filed under
seal. The parties were requested to review this decision and provide proposed redactions of any
LETTOW, Senior Judge.

        Red Cedar Harmonia, LLC (“Red Cedar”) protests the decision of the Defense
Information Systems Agency (the “Agency” or “DISA”) to award a software contract to
NextGen Federal Systems, LLC (“NextGen”). The procurement calls for software to “fulfill[]
[a] requirement[] for independent verification and validation[] of software” across the
Department of Defense’s command and control portfolio. AR 1-9. Red Cedar contends that the
Agency committed procurement error by: (1) making irrational and arbitrary adjustments to Red
Cedar’s labor hours in its proposal; (2) using these arbitrary and irrational numbers when
evaluating price; (3) violating relevant procurement law by irrationally and arbitrarily evaluating
proposals under two subfactors of the technical/management approach evaluation factor; and (4)
allowing these alleged errors to impact the source selection decision, making the Agency’s
decision to award the contract to NextGen arbitrary and capricious. Due to these alleged errors,
Red Cedar asks this court to: (1) declare that the Agency’s evaluation of proposals and award
decision was arbitrary and capricious, an abuse of discretion, and contrary to law; (2) declare that
the Agency’s “discussions” were misleading, unequal, and not “meaningful”; (3) permanently
enjoin the Agency from proceeding with contract performance based on the current evaluation
and award decision; and (4) require the Agency to reopen the procurement and perform a new
evaluation. Compl. at 14.

       The United States (the “government”) produced the administrative record on April 12,
2019, ECF Nos. 27-38, and corrected it on April 25, 2019, ECF Nos. 45-47; Order Granting
Motion to Amend/Correct (April 25, 2019), ECF No. 43.2

        Red Cedar filed a motion for judgment on the administrative record on April 29, 2019.
Pl.’s Mot. for Judgment on the Admin. R. (“Pl.’s Mot.”), ECF No. 48. The government
responded to Red Cedar’s motion and cross-moved for judgment on the administrative record on
May 13, 2019. Def.’s Cross-Mot. for Judgment on the Admin. R. & Def.’s Opp’n to [Pl.’s Mot.]
(“Def.’s Cross-Mot.”), ECF No. 49. NextGen filed its cross-motion and response on the same
day. Def.-Intervenor’s Cross-Mot. & Resp. to [Pl.’s Mot.] (“Def.-Intervenor’s Mot.”), ECF No.
50. Red Cedar replied to the government’s and NextGen’s responses and cross-motions on May
17, 2019. Pl.’s Reply & Resp. to Defs.’ [Cross-Mots.] (“Pl.’s Reply”), ECF No. 51. The
government and NextGen filed responses on May 24, 2019, see Def.’s Reply to [Pl.’s Resp.]
(“Def.’s Reply”), ECF No. 52; Def.-Intervenor’s Reply to [Pl.’s Resp.] (“Def.-Intervenor’s
Reply”), ECF No. 53, and the court held a hearing on the cross-motions on May 30, 2019, see
Hr’g Tr. (May 30, 2019).3



confidential or proprietary information. The resulting redactions are shown by asterisks enclosed
within brackets, e.g., “[***].”
       2
        The administrative record is consecutively paginated, divided into 79 tabs, and is more
than 5,300 pages. Citations to the record cite to the tab and page, as “AR [tab]-[page]” (e.g., AR
36-1770).
       3
           Subsequent citations to the hearing will omit the date.

                                             FACTS4

        In this procurement, the Agency released its first Combined Acquisition Strategy and
Plan (“the Plan”) on May 17, 2018. See AR 1-1. The Plan called for a “Command and Control
[] Test and Evaluation [] contract [that] will be an overarching approach for fulfilling
requirements for independent verification and validation of software across the [command and
control] [p]ortfolio.” AR 1-9. The Plan addressed four major programs already in existence: (1)
the Global Command and Control System-Joint (“GCCS-J”);5 (2) the Global Combat Support
System-Joint (“GCSS-J”);6 (3) the Joint Planning and Execution Services (“JPES”);7 and (4) the
Global Command and Control System-Joint Enterprise (“GCCS-JE”).8 AR 1-9. The programs
were “developed independently of each other” and thus “some of the foundational services that
support each system [were] duplicative.” AR 1-11. The Plan, then, would replace the status
quo, where “multiple contract vehicles” were used to meet the test and evaluation requirements.
AR 1-13. Thus, the ultimate goal of the procurement was to “ensure current operational system
support and life cycle test requirements while simultaneously streamlining the organization[’]s
software test and evaluation capabilities and services.” AR 1-12. The Agency expected the
Plan, consisting of one base year and four option years pursuant to 48 C.F.R. (Federal
Acquisition Regulation (“FAR”)) § 52.217-8, to cost $43,691,488.16. AR 1-15.9 In addition, the

       4
        The recitations that follow constitute findings of fact by the court from the
administrative record of the procurement filed pursuant to Rule 52.1(a) of the Rules of the Court
of Federal Claims (“RCFC”). See Bannum, Inc. v. United States, 404 F.3d 1346, 1356 (Fed. Cir.
2005) (specifying that bid protest proceedings “provide for trial on a paper record, allowing fact-
finding by the trial court”).
       5
         The GCCS-J is “a suite of mission applications that provides critical joint war fighting
[command and control] capabilities and is the principal foundation for dominant battlespace
awareness, providing an integrated, near real-time picture of the battlespace necessary to conduct
joint and multinational operations.” AR 1-9.
       6
         The GCSS-J system is a logistics program that “integrates data from . . . identified
authoritative data sources to provide a fused, integrated, near real-time, multi-dimensional view
of combat support and combat service support across joint capability areas, providing situational
awareness of the battlespace and logistics pipeline.” AR 1-9 to 10.
       7
        JPES is “a portfolio of capabilities that support the policies, processes, procedures and
reporting structures needed to plan, execute, mobilize, deploy, employ, sustain, redeploy, and
demobilize activities associated with Joint Operations. JPES capabilities support the deliberate
planning, crisis action planning[,] and global force management processes.” AR 1-10.
       8
        GCCS-JE provides similar functions to GCCS-J, as it also “provides critical joint
warfighting [command and control] capabilities . . . necessary to conduct joint and multinational
operations.” AR 1-10.
       9
         The Agency expected the base year to cost $9,463,016.42; the first option year
$8,316,468.68; the second option year $8,402,707.29; the third option year $8,635,184.20; and
the fourth option year $8,874,111.57. AR 1-15.
Agency included an option for a 15% “surge,” AR 1-15, a period in which the government
would require additional support for one or more of the identified tasks, AR 3-57. The
government initially estimated this optional “surge” would cost $6,158,075.32, bringing the total
potential cost of the contract to $49,849,563.48, inclusive of the option years. AR 1-15.

        To evaluate proposals, the Agency employed “best value trade off [] evaluation
procedures using technical/management, past performance, and price as the evaluation criteria.”
AR 1-20; AR 3-163. When combined, technical/management and past performance would be
“significantly more important than price.” AR 1-20; AR 3-165 (solicitation). Thus, in accord
with FAR § 15.101-1, it was possible for the Agency to select an offeror that was not the lowest
priced option or the highest technically rated option. AR 1-20.

        The technical/management approach criteria branched into six subfactors: Subfactor 1 -
security clearance requirements; Subfactor 2 - technical approach to test execution; Subfactor 3 -
technical approach to test event build support; Subfactor 4 - technical approach to cybersecurity
and information compliance; Subfactor 5 - technical approach to test process improvement and
test automation support; and Subfactor 6 - management/staffing approach. AR 3-164.
Subfactors 3 and 6 are pertinent to Red Cedar’s protest.

       The government used a two-step evaluation process due to the sensitive nature of the
contract. First, the government would evaluate the submitted proposals as either “acceptable” or
“unacceptable” under Subfactor 1, the security clearance requirement. Any proposals rated as
not acceptable under Step 1 would be culled from the procurement and not evaluated further.
AR 3-164. In short, Step 1 would act as a filter, eliminating any need for the government to
evaluate any proposals that failed the security clearance requirement.

        For Step 2, the surviving bids, i.e., those with an acceptable security clearance rating,
would be evaluated under the remaining factors and subfactors. See AR 3-165 to 170. Each of
the remaining technical/management subfactors, i.e., subfactors 2 through 6, would be assigned
an individual combined technical/management rating and risk rating. AR 3-165. These ratings
ranged from “unacceptable” to “marginal,” “acceptable,” “good,” and “outstanding.” AR 3-165.
Past performance would be evaluated based on recency, relevancy, and quality, AR 3-167 to 169,
for an ultimate performance confidence rating ranging from “no confidence” to “substantial
confidence,” AR 3-169. Price and cost would be evaluated by the government to determine if it
was “reasonable, realistic, and complete . . . through techniques as described in FAR [§] 15.404.”
AR 3-170. The cost of each proposal would be evaluated for “realism” using “probable costs”
instead of “proposed costs.” AR 3-170.10 “Probable costs [would be] calculated by adjusting
CPFF [Cost-Plus-Fixed-Fee] CLINs [Contract line item numbers] based on results of the cost
realism analysis.” AR 3-170.




       10
          In other words, the government would calculate the cost it estimated would be required
to actually complete the offeror’s proposed contract plan, displacing the cost proposed by the
offeror.

       The Plan was revised on July 13, 2018. AR 2-31. The revision reflected a significant
increase in the optional “surge” support, from 15% to 40%. AR 2-33. This increased the
government’s expected total life-cycle cost to $60,113,022.34. AR 2-33.

        The Agency released the solicitation on July 26, 2018. AR 3-34. The initial deadline for
offers was August 27, 2018 at 12:00 p.m. AR 3-34. The solicitation contained thirteen core
contract line item numbers (“CLINs”). AR 3-35 to 39. CLIN 0001 was a firm-fixed-price
(“FFP”) CLIN detailing contract management. AR 3-35. CLIN 0002 was an FFP CLIN for a
“contract transition plan.” CLINs 0003 to 0006 were direct support line items for the GCCS-J
program. AR 3-35 to 36.11 CLINs 0007 to 0013 were direct support line items for the JPES
program. AR 3-37 to 39.12 In addition, the solicitation contained CLIN 9999, the optional
“surge support” contract line item. AR 3-57. Each of the four option years contained CLINs
identical to the base year. See, e.g., AR 3-39 (“CLIN 1001”). The performance work statement
of the solicitation detailed the tasks involved in each CLIN. E.g., AR 3-35.

        The solicitation also provided offerors with the expected number of “test events” for each
year of performance. AR 3-59 to 60. These “test events” were to “[e]nsure that the [command
and control portfolio] capabilities are fully security compliant, installed[,] and configured per
delivered documentation in a . . . test environment, and tested per the established test plan.” AR
3-59. Put differently, the “test events” would represent the primary deliverables of the
solicitation. For the operational and management component of the GCCS-J program, the
government anticipated 442 “test events” per year. AR 3-59 to 60.13 For the operational and
management component of the JPES portfolio, the government anticipated 17 “test events” per
year. AR 3-59 to 60.14 Finally, the government anticipated nine “test events” for research,

       11
         CLIN 0003 was a cost-plus-fixed-fee (“CPFF”) line item for test support. AR 3-35.
CLIN 0004 was a CPFF line item for external project support. AR 3-36. CLIN 0005 was a
CPFF contractor training CLIN. AR 3-36. And, CLIN 0006 was a cost CLIN for other “direct
costs.” AR 3-36.
       12
          CLIN 0007 was a CPFF line item for operational and management test support for
JPES. AR 3-37. CLIN 0008 was a CPFF line item for operational and management external
project support. AR 3-37. CLIN 0009 was a CPFF line item for operational and maintenance
contractor training. AR 3-37. CLIN 0010 was a CPFF line item for research, development,
testing, and evaluation test support. AR 3-38. CLIN 0011 was a CPFF line item for research,
development, testing, and evaluation external project support. AR 3-38. CLIN 0012 was a
CPFF line item for research, development, testing, and evaluation contractor training. AR 3-38.
CLIN 0013 was a cost line item for other operations and maintenance direct costs in support of
JPES testing. AR 3-39.
       13
          The 442 “test events” encompassed four “major release events,” seven “minor release
events,” 428 “[information assurance vulnerability alert] and emergency release events,” and
three “[foreign military sales]-related events.” AR 3-59 to 60 (capitalization removed).
       14
         The 17 “test events” consisted of seven “minor release events” and ten “[information
assurance vulnerability alert] and emergency release events.” AR 3-59 to 60 (capitalizations
removed).
development, testing, and evaluation support for the JPES portfolio. AR 3-59 to 60.15 The
solicitation, however, did not include a set mix of labor or a list of labor categories beyond six
“key personnel.” AR 5-425.16 Rather, the Agency asked each bidder to offer a “unique”
approach to the solicitation, including proposing its own labor mix. See AR 4-353
(instructions), 369 (evaluation); see also AR 4-375 (Agency answers to industry questions).

       The solicitation was subsequently modified three times over the next month. See AR 4-
310; AR 5-397; AR 6-449. The first amendment, Amendment 0001, was released on August 16,
2018, AR 4-310, and responded to offeror questions and extended the due date to August 30,
2018 at 2 p.m. AR 4-310.17 Amendment 0002 was released on August 20, 2018, AR 5-397, and
modified one of the answered questions in Amendment 0001, while answering additional
questions, AR 5-397, 448.18 Amendment 0003 extended the proposal due date to August 31,
2018. AR 6-449.

       The Agency initially received five proposals along with those from four potential
subcontractors. The five offerors were CompQSoft, AR 8-467, Intellect Solutions, LLC, AR 10-
915, Maximum Technology Corporation, AR 13-1615, NextGen, AR 14-1896, and Red Cedar,
AR 15-2169. Two of the subcontractors, Ad Hoc Research Associates, AR 7-451, and Jacobs
Technology, Inc., AR 11-1293, were subcontractors for NextGen. See AR 14-1897.

        On September 18, 2018, the government sent out letters to three offerors informing them
that their proposals would not be evaluated because of various deficiencies. AR 16-2468, 17-
2470, 18-2472.19 Only Red Cedar and NextGen survived to Step 2 and further evaluation.



       15
         The nine “test events” were listed as one “major release event,” three “minor release
events,” and five “[information assurance vulnerability alert] and emergency release events.”
AR 3-59 to 60.
       16
         The six “key personnel” were: (1) program manager; (2) test engineer (lead); (3) test
engineer (manager); (4) database administrator; (5) information assurance engineer; and (6)
systems administrator. AR 5-425.
       17
           In response to a question asking about total labor hours and a labor category
breakdown, the government responded that “[t]his is a performance-based requirement,” and that
“[i]t is up to the offerors to propose a mix of hours and labor categories . . . that they determine
necessary in order to meet the requirements of the [Solicitation].” AR 4-375.
       18
         Amendment 0002 brought the total number of answered questions to 68. See AR 5-
448. Many of the questions arose due to the government’s decision to have the offerors propose
their own labor employment mix. See, e.g., AR 5-448 to 50.
       19
          CompQSoft failed to include a statement regarding “organizational and consultant
conflicts of interest” and therefore failed to conform with the instructions of the Solicitation and
was excluded. AR 16-2468. Intellect Solutions, LLC’s proposal failed at the Step 1 security
clearance requirement and was excluded from consideration. AR 17-2470. Maximum
Technology Corporation’s proposal was filed late and excluded. AR 18-2472.
        The Agency then started its initial analysis of the bids of Red Cedar and NextGen. See,
e.g., AR 19-2473 (initial price analysis). The initial analysis showed that the two bidders had
proposed significantly different prices and hours. See AR 19-2473. Red Cedar initially
proposed a total of [***] hours with a total cost of [***] for all years and the optional “surge.”
AR 19-2473. NextGen, by contrast, proposed a total of [***] hours with a total cost of [***] for
all years and the optional “surge.” AR 19-2473. The initial analysis also noted that the
independent government cost estimate (“IGCE”) was for 562,991 hours with a total cost of
$64,538,008. AR 19-2474. The government issued NextGen three “[n]otice[]s” based on its
price evaluation: (1) NextGen used a national number for its wage data instead of a localized
one; (2) two labor categories had “unbalanced pricing;” and (3) NextGen did not provide a
summary of its purchasing system. AR 19-2479 to 80. Red Cedar was issued only one
“[n]otice”: to explain its use of a 2017 escalation factor for annual changes in direct labor costs
that appeared low. AR 19-2484.

        For the past performance factor, NextGen submitted three prior contracts for evaluation.
See AR 20-2486 to 94. After reviewing the prior contracts, the Agency assigned NextGen a
“substantial confidence” rating because “[prior contract 2] was somewhat relevant, [prior
contract 3] was relevant, and each had [e]xceptional levels of quality.” AR 20-2493. Red Cedar
also provided three prior contracts for the Agency to review. See AR 21-2495 to 2502. Unlike
NextGen, however, only one of Red Cedar’s contracts was considered “relevant” with a quality
level of “very good.” AR 21-2496, 2500 to 02. Therefore, the government assigned a
“satisfactory confidence” rating to Red Cedar. AR 21-2501 to 02.

        The Agency also conducted an initial technical evaluation of Red Cedar’s and NextGen’s
proposals for each of the technical/management subfactors. Both bidders received “acceptable”
ratings for Subfactor 1. AR 22-2505; 24-2524. For Subfactors 2, 3, and 5, NextGen received a
“good” rating with one strength in each subfactor and no weaknesses or deficiencies. AR 22-
2505 to 07, 2509 to 10. For Subfactor 4, NextGen received an “acceptable” rating with no
strengths, weaknesses, or deficiencies. AR 22-2508 to 09. But for Subfactor 6, the management
and staffing approach, NextGen received a “marginal” rating. AR 22-2513 to 15. NextGen
received one strength for [***]. AR 22-2514 to 15.20 NextGen also received two weaknesses in
this category. AR 22-2515. [***]. AR 22-2515. The second, and more important, weakness
related to the total number of hours proposed for NextGen’s proffered technical approach. AR
22-2515. The Agency provided NextGen with 38 positions that required adjustments to hours.
See AR 22-2516 to 18. The Agency considered neither weakness to be “significant,” however.
AR 22-2518.

        Red Cedar, like NextGen, also received a “good” rating for Subfactors 2 and 5, with each
Subfactor having one strength and no weaknesses or deficiencies. AR 24-2524 to 25, 2528 to
29. Unlike NextGen, however, Red Cedar received an “acceptable” rating for Subfactor 3, with
no strengths, weaknesses, or deficiencies. AR 24-2525 to 26. Red Cedar further received a



       20
         The technical evaluation specifically noted [***]. AR 22-2515. A Scrum Master is an
individual who facilitates team cooperation and information exchange, and who helps remove
impediments to the team’s progress.

“marginal” rating for Subfactor 4, with the Agency noting one “significant weakness” due to
“[***].” AR 24-2528.

       Notably, Red Cedar also received a “marginal” rating in Subfactor 6, the
management/staffing approach, with the Agency identifying four weaknesses in the proposal. AR 24-
2532 to 33. Red Cedar’s first three weaknesses were because its proposal “lack[ed] [***],”
“[***].” AR 24-2532 to 33. For the fourth weakness, the Agency, as it did for NextGen,
provided Red Cedar with 58 positions from its proposal that required adjustments to the hours
proposed. See AR 24-2533 to 37. The Agency did not provide either Red Cedar or NextGen
with specific hourly requirements; rather, it stated that specific categories “appear low” or
“appear high.” See, e.g., AR 27-2580 to 84; AR 26-2565 to 68.

       On January 7, 2019, following the completion of the initial evaluations and the
development of a competitive range, see AR 25-2539 to 53, the government sent final proposal
requests and evaluation notices to both Red Cedar and NextGen. See AR 26-2554 to 68; AR 27-
2569 to 86.21 The Agency sent Red Cedar six evaluation notices, see, e.g., AR 27-2570, and
NextGen five evaluation notices, see, e.g., AR 26-2557. The Agency requested both offerors
submit their revised final proposals by January 14, 2019 at 2 p.m. E.g., AR 26-2554.

        Both offerors submitted complete and timely proposals. See, e.g., AR 28-2587; AR 29-
2870; AR 30-2894; AR 31-3106; AR 32-3205. In response to the concerns raised by the
Agency, both Red Cedar and NextGen increased their total yearly hours and price and remedied
other technical and price issues noted by the Agency in its initial evaluation.22 Red Cedar
increased its hours from [***] to [***] total hours, AR 27-2580; AR 34-3470, and its price from
[***] to [***]. Compare AR 19-2473, with AR 35-3470. NextGen increased its total hours
from [***] to [***] total hours, and its price from [***] to [***]. Compare AR 19-2473, with
AR 35-3470. NextGen stated in its response that it had “considered the [g]overnment’s feedback
on hours . . . and have adjusted hours accordingly.” AR 28-2599. While Red Cedar also
adjusted its labor hours, it did not provide a written response to the government’s concerns
regarding its labor mix. AR 30-3011. Subsequently, the Agency sent a “clarification” e-mail to
Red Cedar on January 16, 2019, in which the Agency explained how it reached the estimated
hours used for its realism analysis. AR 33-3298.23

        On February 5, 2019, the government determined that a “[s]econd [r]ound of
[d]iscussions” was required to select an awardee. E.g., AR 36-3486. And due to time

       21
        The evaluation notices sent to the two offerors reflected the weaknesses and questions
developed by the Agency during its initial evaluation.
       22
           For example, NextGen revised its proposal to use the correct Ft. Meade, Maryland area
for its salaries instead of the national average. See AR 28-2589. Red Cedar addressed
cybersecurity concerns. See AR 30-3013.
       23
          Red Cedar initially submitted its proposal with [***] hours for the base year and [***]
for the option years. AR 33-3298. The Agency revised this number upwards by “taking the total
hours from the blue boxes for each of the performance periods in each of the CLIN worksheets.”
AR 33-3298; see also AR 34-3469.

constraints, the Agency requested responses by February 8, 2019 at 2 p.m. AR 36-3486. Both
NextGen and Red Cedar were sent eight evaluation notices based on their revised proposals.
See, e.g., AR 36-3489; AR 37-3508. In its notices to NextGen, the government noted three
positions, down from 38, that required upward adjustment.24 AR 36-3503 to 04. The Agency
also found NextGen’s proposed price of [***] to be unrealistic, and calculated the probable cost
to be [***]. AR 36-3499.

        Red Cedar’s revised proposal generated more concern from the Agency. See AR 35-3483
to 84. The Agency found that 52, down from 58, labor positions still required adjusting. See AR
37-3520 to 25. Further, the Agency found Red Cedar’s proposed price of [***] to be unrealistic
by 330% when compared to the government’s probable cost estimate of [***]. AR 37-3516.
This price disparity raised a concern for the Agency that Red Cedar did not have a “clear
understanding of the magnitude of the [contract] requirements.” AR 35-3484. The Agency also
found that Red Cedar failed to bid a price for the option years or 6-month extension period for
CLIN 0002 and was uncertain whether Red Cedar made a mistake, would provide no services, or
was offering free services. AR 35-3484; AR 37-3510.25

        Due to a missing price template that the Agency requested both offerors use, the due date
for the revised proposals was extended to February 11, 2019. Over the next several days, the
contracting officer answered questions raised by the two offerors related to the revised proposal.
See, e.g., AR 45-3990; AR 48-3995. Red Cedar expressed confusion about how the Agency
calculated “probable” cost and inquired into the Agency’s process. AR 45-3990. The
contracting officer sent responses to both offerors explaining the Agency’s cost realism
obligations under FAR § 15.404-1(d). See AR 46-3991; AR 47-3993.

        Both NextGen and Red Cedar submitted their revised final proposals on February 11,
2019. See AR 53-4230; AR 54-4620; AR 55-4714; AR 56-4788; AR 57-4843. NextGen’s final
proposal estimated a total of [***] hours to complete the contract, inclusive of the option years
and surge support, for a total proposed cost of $56,523,087. AR 61-5138. Red Cedar’s final
proposal estimated [***] hours to complete the contract, inclusive of option years and surge
support, for a total proposed cost of [***]. AR 61-5138. The government then conducted its
final evaluation of the two proposals. See, e.g., AR 60-5105; AR 61-5138; AR 62-5156. It




       24
         The government also raised several other, more minor concerns with NextGen’s
proposal. See AR 35-3477 to 79. For example, NextGen did not use the most current data for its
salary survey calculations. AR 35-3478.
       25
         Like NextGen, Red Cedar’s other notices were for more mundane concerns such as
using an incorrect start date. See AR 35-3484; AR 37-3512 to 13.
reached the following conclusions:

[summary table of final evaluation results not reproduced]

AR 71-5341.

        The Agency found that NextGen had “resolved to the [g]overnment[’s] satisfaction” the
outstanding notices, including the total labor hours proposed. See, e.g., AR 60-5120, 5124,
5126, 5128, 5130, 5137. In addition, the Agency found NextGen’s proposed price and labor mix
to be realistic. AR 61-5141 to 44. Red Cedar also resolved the majority of its outstanding
notices. See, e.g., AR 62-5187; 5189; 5192. But the Agency found Red Cedar’s labor mix in its
proposal to still have deficiencies, as 37 positions still “appear[ed] low” based on the
government’s evaluation. See AR 62-5168 to 85. The 37 positions ranged across the CLINs and
resulted from Red Cedar’s failure to [***]. See, e.g., AR 62-5169 to 81. Many of the
labor categories had been increased in response to the Agency’s first request for revisions in
January, but then reduced for the final proposal after the second request for revisions in
February. See AR 62-5169.

         Due to the deficiencies, the government revised Red Cedar’s labor hours upwards for
certain positions during its evaluation of the CPFF CLINs and the cost-based surge CLIN. AR
61-5138. FFP CLINs were not adjusted. AR 61-5138. The government calculated that Red
Cedar’s probable cost, inclusive of all option years and the “surge,” would be $73,372,403 based
on its labor mix. AR 61-5138. The resulting price difference of $26,117,668 between the
proposed and evaluated costs “significantly increases the [g]overnment’s cost risk during
performance.” AR 61-5152.

        Based on its evaluation, the Agency awarded the contract to NextGen. See AR 67-5237
to 39; see also AR 69-5241 to 42. According to the Agency’s best value award analysis,
NextGen’s proposal “had four documented strengths,” while Red Cedar’s proposal had “two
documented strengths.” AR 67-5237. Therefore, “the highest rated offeror, taking into
consideration all non-price factors, is [NextGen]. [NextGen] is also the lowest priced
(evaluated) offer.” AR 67-5237. The Agency found that Red Cedar’s “proposal is not
technically superior and does not warrant a tradeoff to award to the higher priced offer.” AR 67-
5237. At bottom, the contracting officer found NextGen’s “proposed approach is technically
superior and more beneficial to the [g]overnment,” and awarded the contract accordingly. AR
67-5237. Red Cedar was notified on February 26, 2019, and requested a debrief on March 4,
2019. AR 68-5240; AR 70-5338. After an initial written debrief, AR 71-5340, Red Cedar sent
additional questions to the Agency, specifically inquiring about how it calculated Red Cedar’s
evaluated price, AR 72-5377. The Agency responded on March 11, 2019, largely echoing its
previous responses to Red Cedar about the Agency’s cost realism responsibilities under FAR §
15.404-1. AR 73-5382. A stop work order was issued by the Agency three days later, on March
14, 2019. AR 74-5384. Red Cedar filed its complaint in this court on March 15, 2019. Compl.
at 1.

                                JURISDICTION & STANDING

        This court has jurisdiction over bid protests pursuant to the Tucker Act, 28 U.S.C. §
1491. The Tucker Act vests this court with jurisdiction “to render judgment on an action by
an interested party objecting to a . . . proposed contract or to a proposed award or the award of a
contract or any alleged violation of statute or regulation in connection with a procurement or a
proposed procurement.” 28 U.S.C. § 1491(b)(1).

        A threshold issue is whether Red Cedar has standing to challenge the award decision, a
burden it bears as plaintiff. See Myers Investigative & Sec. Servs., Inc. v. United States, 275 F.3d
1366, 1369-70 (Fed. Cir. 2002) (citing Steel Co. v. Citizens for a Better Env’t, 523 U.S. 83, 102-
04 (1998), and quoting Lujan v. Defenders of Wildlife, 504 U.S. 555, 561 (1992)). To
demonstrate standing under 28 U.S.C. § 1491(b)(1), Red Cedar must be an “interested party”
who suffered prejudice from a significant procurement error. CliniComp Int’l, Inc. v. United
States, 904 F.3d 1353, 1358 (Fed. Cir. 2018). An interested party is an actual bidder who had a
substantial chance at award of the contract. Id.; see also Hyperion, Inc. v. United States, 115
Fed. Cl. 541, 550 (2014) (quoting Orion Tech., Inc. v. United States, 704 F.3d 1344, 1348 (Fed.
Cir. 2013)). An interested party suffers prejudice from a significant procurement error when
“but for the error, it would have had a substantial chance of securing the contract.” CliniComp,
904 F.3d at 1358 (emphasis in original).

       Red Cedar’s allegations and the administrative record indicate that it is an interested
party who has sufficiently alleged prejudice. Red Cedar was one of two eligible bidders
evaluated by DISA. If taken as true, the alleged errors would enhance its competitive position
and potentially provide a basis for award. Neither the government nor the defendant-intervenor
contests Red Cedar’s ability to bring this protest. Therefore, Red Cedar has standing and the
court has jurisdiction.

         STANDARD OF REVIEW FOR A MOTION FOR JUDGMENT ON THE
                       ADMINISTRATIVE RECORD

        The standards of the Administrative Procedure Act (“APA”), 5 U.S.C. § 706, govern the
court’s review of a protest of the government’s decisions regarding award of a contract. See 28
U.S.C. § 1491(b)(4) (“In any action under this subsection, the courts shall review the agency’s
decision pursuant to the standards set forth in section 706 of title 5.”). Under the APA, the court
may set aside a government procurement decision that is “arbitrary, capricious, an abuse of
discretion, or otherwise not in accordance with law,” 5 U.S.C. § 706(2)(A), subject to the
traditional balancing test applicable to a grant of equitable relief. See PGBA, LLC v. United
States, 389 F.3d 1219, 1224-28 (Fed. Cir. 2004); Hyperion, 115 Fed. Cl. at 550.

        The court may not “substitute its judgment for that of the agency,” Hyperion, 115 Fed.
Cl. at 550 (quoting Keeton Corrs., Inc. v. United States, 59 Fed. Cl. 753, 755 (2004) (in turn
quoting Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U.S. 402, 416 (1971), abrogated
on other grounds as recognized in Califano v. Sanders, 430 U.S. 99, 105 (1977))), but “must
uphold an agency’s decision against a challenge if the ‘contracting agency provided a coherent
and reasonable explanation of its exercise of discretion,’” id. (citing Axiom Res. Mgmt., Inc. v.
United States, 564 F.3d 1374, 1381 (Fed. Cir. 2009)). The court may overturn the government’s
procurement decision only “if ‘(1) the procurement official’s decision lacked a rational basis; or
(2) the procurement procedure involved a violation of regulation or procedure.’” Centech Grp.,
Inc. v. United States, 554 F.3d 1029, 1037 (Fed. Cir. 2009) (quoting Impresa Construzioni
Geom. Domenico Garufi v. United States, 238 F.3d 1324, 1332 (Fed. Cir. 2001)). In conducting
the rational basis analysis, the court looks to whether “the contracting agency provided a
coherent and reasonable explanation of its exercise of discretion,” Axiom, 564 F.3d at 1381
(quoting Impresa Construzioni, 238 F.3d at 1333), and affords “contracting officers . . .
discretion upon a broad range of issues,” AgustaWestland N. Am., Inc. v. United States, 880 F.3d
1326, 1332 (Fed. Cir. 2018) (quoting Impresa Construzioni, 238 F.3d at 1332-33). Accordingly,
“the disappointed bidder bears a heavy burden of showing that the award decision had no
rational basis.” Centech, 554 F.3d at 1037 (quoting Impresa Construzioni, 238 F.3d at 1332-33).
Protests alleging a violation of regulation or procedure “must show a clear and prejudicial
violation.” Axiom, 564 F.3d at 1381 (quoting Impresa Construzioni, 238 F.3d at 1333).

                                           ANALYSIS

         The case before the court presents two core questions. First, Red Cedar alleges the
Agency improperly evaluated its labor mix and added hours to its proposal in an arbitrary and
irrational manner. These increased hours, Red Cedar argues, were then used by the Agency to
calculate Red Cedar’s evaluated price, i.e., the price the Agency believed Red Cedar’s proposal
would truly cost. Second, Red Cedar alleges that the Agency acted in an irrational and arbitrary
manner in evaluating Subfactors 3 and 6 by using hidden and uneven evaluation criteria that
benefitted NextGen and harmed Red Cedar. According to Red Cedar, these errors led the
Agency astray in its final evaluation by artificially lowering Red Cedar’s technical rating and
artificially raising Red Cedar’s price. The court, then, must determine (1) if the Agency acted
improperly in increasing the number of labor hours in Red Cedar’s proposal and (2) if the
Agency unfairly evaluated proposals under Subfactors 3 and 6.

  I.   The Agency’s Adjustment of Red Cedar’s Proposed Management/Staffing
       Approach and Price

        The crux of Red Cedar’s complaint lies with the Agency’s evaluation of, and subsequent
modification to, its labor mix. The Agency’s evaluation resulted in a significant number of hours
being added to Red Cedar’s labor mix, an “acceptable” rating under Subfactor 6, and an
increased cost evaluated for realism. Red Cedar divides its argument into four segments based
on the type of CLIN the Agency adjusted: (1) contract transition plan; (2) test support; (3)
contractor training; and (4) external project support. See Pl.’s Mot. at 16-25.

        Before discussing each of Red Cedar’s allegations of error, the court notes that each of
the adjustments to the CLINs was made pursuant to the Agency’s technical evaluation. This
court “gives great deference to an agency’s technical evaluation of an offeror’s proposal.” L-3
Commc’ns. EOTech, Inc. v. United States, 87 Fed. Cl. 656, 664 (2009). Technical ratings by the
Agency “involve discretionary determinations of procurement officials that a court will not
second guess.” Id. (quoting E.W. Bliss Co. v. United States, 77 F.3d 445, 449 (Fed. Cir. 1996)).
This deference is heightened for cases involving highly technical subject matter. See id. (quoting
Electro-Methods, Inc. v. United States, 7 Cl. Ct. 755, 762 (1985) (“judicial restraint is
appropriate and proper” in such circumstances) (additional citation omitted)). The subject matter
in this bid protest involves highly technical and advanced software designed to support complex
military operations. The court, therefore, proceeds with caution.

        The court also notes that it appears Red Cedar attempted to make a significant
modification to its final proposal without adequate explanation. Red Cedar may have been
operating under the mistaken assumption that any increase in its total hours would also increase
the government’s evaluated price. See AR 45-3990 (“[The] government[’]s comments suggest
that we will need to increase the hours for several labor categories. If we do that . . . will [it]
further increase our price which in turn will increase the government’s probable cost?”). In
short, Red Cedar at one point may have believed that any additional hours would continue to
drive up the probable cost, rather than shrinking the difference between proposed and expected
cost.

                             A. Contract Transition Plan: CLIN 0002

        Red Cedar first argues that the Agency improperly added 480 hours to CLIN 0002, which
concerned a “contract transition plan.” Pl.’s Mot. at 16-18. According to Red Cedar, the
Agency increased hours for four positions due to the “need for report writing” and “the need for
transition.” Id. at 17-18 (citing AR 62-5169).26 Red Cedar primarily contends that the Agency
“failed to take into account [Red Cedar’s] transition plan,” which [***]. Id. at 18. Red Cedar
also argues the increases were unreasonable because “there w[ere] no report writing
requirements associated with this task,” and “[***].” Id. at 17-18.

        The court disagrees. The Agency, in its initial evaluation, found Red Cedar’s hours for
CLIN 0002 to be unrealistic and recommended Red Cedar increase its hours across six positions.
See AR 79-5406. Red Cedar responded by accepting the Agency’s recommendations for its
revised proposal. See AR 79-5406. But for the final evaluation, Red Cedar changed its proposal
yet again, significantly decreasing the number of hours for four identified positions. The
Agency then increased the hours for the deficient positions to the previously proposed numbers.
See AR 79-5406 (comparing the final “Recommended Hours” with the “Interim Proposed
Hours”). Red Cedar, though, also increased hours in other positions. See AR 79-5406. Red
Cedar did not adequately explain this modification, which resulted in a higher overall number as
the Agency restored hours to positions it believed were required to complete the contract
adequately while leaving Red Cedar’s increases to other positions in place. See, e.g., AR 55-
4738, AR 79-5406. In addition, the Agency provided Red Cedar with a written explanation
describing why it believed Red Cedar’s hours to be too low after each evaluation. See, e.g., AR
62-5169 (for example, the Agency thought the [***] “hours appear low, based on the Offeror’s
proposed approach, due to the need for transition of [the] [***]”).



       26
            The Agency added hours to the [***] positions. See AR 79-5406.
         Rather than being arbitrary, the Agency was consistent from the initial to the final
evaluation. It had numerous concerns about Red Cedar’s proposal, including its transition plan.
See, e.g., AR 24-2532 to 35. From the beginning, the burden was on Red Cedar to convince the
Agency that its proposed plan was reasonable and provided the government with the best value.
It failed in that task. It is irrelevant that NextGen’s proposal did not include a document
specialist. See Pl.’s Mot. at 17. The record indicates that the Agency found NextGen’s proposal
to be more technically sound, and that NextGen worked to improve its proposal from the initial
evaluation. Compare AR 22-2516 to 18 (38 positions required adjustment in NextGen’s initial
proposal), with AR 36-3503 to 04 (3 positions required adjustment after NextGen’s first revised
proposal). Further, the Agency did not add a blanket number of hours across CLIN 0002 for
Red Cedar. Instead, it specifically targeted deficient positions and added hours based on its
evaluation. From the record, the court cannot conclude that the Agency’s decisions lacked a
rational basis. See Centech, 554 F.3d at 1037 (quoting Impresa Construzioni, 238 F.3d at 1332-
33).

        Further, this is an area of significant agency discretion. See L-3 Commc’ns., 87 Fed. Cl.
at 664. As stated, this is a highly technical procurement involving complex computer programs.
It would be inappropriate for the court to wade into the underbrush and attempt to determine
what number of hours are necessary for a “[***]” to perform the contract transition tasks. See id.
And there is no need, as the Agency provided a consistent and coherent explanation at each stage
of the evaluation.

       Consequently, the court does not find the Agency’s decision to be inappropriate with
regard to CLIN 0002.

                                     B. Test Support CLINs

       Red Cedar next alleges that the Agency acted improperly when adding a significant
number of hours to the test support CLINs: 0003, 0007, and 0010. See Pl.’s Mot. at 18-23.
These CLINs were designed to support the various test events that would occur throughout a
given contract year. See, e.g., AR 5-400. As the hours added back to each of the test support
CLINs represented the bulk of the total added hours, see AR 79-5406 to 10, the court will
address each individually.

       1. CLIN 0003 (GCCS-J).

         Red Cedar first argues that the government acted irrationally by adding back 11,000
hours to CLIN 0003. See Pl.’s Mot. at 20-22. According to Red Cedar, the hours added and the
explanation by the Agency were “inconsistent with the Agency’s evaluation of [Red Cedar’s]
prior iterations,” id. at 21, and the Agency “disregarded [Red Cedar’s] approach to [***],” id. at
22.

        The argument about inconsistency is repeated throughout Red Cedar’s brief but is
unpersuasive. There is no inconsistency. The Agency continually told Red Cedar that its hours
for specified categories appeared low due to the “need to perform information assurance
engineering on multiple releases,” AR 79-5406, and attempted to guide Red Cedar to the
problem areas of its proposal. See, e.g., AR 27-2580 to 84. Red Cedar’s revised final offer
increased some hours beyond the recommended amount and reduced others, while some
remained static. See AR 79-5406. The Agency’s final evaluation only added hours for Red
Cedar’s proposal to reach the previously approved proposals and to address previously identified
problem areas.

        Further, it is well within the Agency’s discretion to “disregard” Red Cedar’s approach.
See E.W. Bliss Co., 77 F.3d at 449. Red Cedar’s final proposal failed adequately to [***] for
CLIN 0003 were intended to [***]; see AR 55-4729 to 32; see also AR 79-5406 to 07. It is
reasonable, then, for the Agency to add back hours to ensure that the bare minimum
requirements of the contract could be completed. And, as before, the court will not engage in the
task of guessing how many additional hours are “too many” for this highly technical bid protest.
Thus, the court cannot conclude from the record that the Agency’s decision lacked a rational
basis.

       2. CLIN 0007 (JPES O&M).

       Red Cedar next argues that the Agency erred by adding 10,690 hours to CLIN 0007
because CLIN 0007 “only [had] 17 anticipated events per year, and no major events scheduled.”
Pl.’s Mot. at 22-23. In short, Red Cedar argues that because the total number of test events for
CLIN 0007 regarding JPES was significantly less than the number of test events for CLIN 0003
regarding GCCS-J, the difference in hours was an unreasonable addition by the Agency:

[Table of anticipated test events omitted.]
AR 5-400.

        Red Cedar’s contention takes a myopic view of the test events and the solicitation. As
the government notes in its brief, JPES is “a portfolio of capabilities,” as opposed to a single
program. Compare AR 3-59 (emphasis added), with AR 3-58 (“GCCS-J is the Department of
Defense [] joint [command and control] [s]ystem of [r]ecord for achieving full spectrum
dominance.”); see also Def.’s Cross-Mot. at 28. Red Cedar’s [***] is divorced from context and
rightly gave the Agency pause. In response, the Agency was well within its discretion to reject
Red Cedar’s approach and add back additional hours to its proposal. And as with the other
CLINs, the Agency was consistent in its recommendations and provided a written rationale. See,
e.g., AR 37-3522 (“Hours still appear low, due to need to test multiple releases of JPES across
multiple systems.”). Consequently, the court finds the Agency’s decision to add 10,690 hours to
CLIN 0007 had a rational basis.

       3. CLIN 0010 (JPES RDT&E).

        Red Cedar makes identical arguments with respect to CLIN 0010. See Pl.’s Mot. at 23
(“there were only nine such [test] events anticipated each year”). Here, the government added
back 6,660 hours to Red Cedar’s proposal. See AR 79-5409. But as with CLIN 0007, it appears
that Red Cedar misunderstood the nature and complexity of the JPES portfolio. Although there
may have been fewer overall test events for CLIN 0010, these test events were of a more
complex nature than the 428 “IAVA and Emergency Release Events” for the GCCS-J test CLIN.
See AR 3-59 to 60. And, as with CLIN 0007, the Agency added back hours in a manner
consistent with its approach from the initial evaluation and provided Red Cedar with a rationale.
See AR 79-5409; see also, e.g., AR 37-3524 (“[T]he hours still appear low, based on the
[o]fferor’s approach, due to [the] need to test multiple releases of JPES across multiple
systems.”). The court thus finds the Agency had a rational basis for adding back hours to CLIN
0010.

   C. Contractor Training CLINs: 0005 (GCCS-J); 0009 (JPES O&M Contractor Training);
      0012 (JPES RDT&E)

        Red Cedar next argues that the Agency unreasonably added back 1,114 hours to the
CLINs related to the training of the Agency’s “personnel in the process and procedures
[required] to perform the activities under the subject contract.” Pl.’s Mot. at 23 (citing AR 5-
410). According to Red Cedar, it “expected [***].” Id. at 23-24. In essence, Red Cedar
contends the Agency “ignored [Red Cedar’s] plan to [***],” and that “the Agency’s adjustments
did not [] take into account the number of scheduled events for each system.” Id. at 24. Red
Cedar also notes that the Agency did not uniformly increase hours for all positions across the
contractor training CLINs and that NextGen’s proposal had similar deficiencies. Id. at 24-25.

        The court finds the Agency acted reasonably. As with the other CLINs, the Agency acted
within its discretion to reject Red Cedar’s training approach as insufficient based on its
evaluation. See, e.g., E. W. Bliss Co., 77 F.3d at 449. It would be inappropriate for the court to
make an uneducated technical evaluation regarding the sufficiency of Red Cedar’s contractor
training plans. See L-3 Commc’ns., 87 Fed. Cl. at 664. And, although the Agency may not have
been explicit, it is evident from its initial evaluation that it found Red Cedar’s approach to be
lacking. See, e.g., AR 79-5407; see also AR 37-3522 to 25. The Agency concluded Red Cedar’s
plan did not [***], e.g., AR 79-5407, 5409, 5410, and Red Cedar’s proposal did not [***], see
AR 55-4738 to 40. As previously discussed, Red Cedar errs by looking purely at the number of
test events, Pl.’s Mot. at 24-25, rather than at the complexity and depth of those events.
The JPES CLINs were for a portfolio of tasks, rather than a single program. Thus, [***] would
not make sense given the contract’s parameters.

        In addition, the Agency did not uniformly increase the number of hours for the “Engineer
Junior” for all three CLINs, contrary to Red Cedar’s argument. See Pl.’s Mot. at 24-25. Instead
of simply adding hours across the CLINs, the record demonstrates the Agency took a tailored
approach based on its evaluation of Red Cedar’s proposal. See AR 79-5407 to 10. Although not
explicit, the Agency appears to have accepted Red Cedar’s approach to [***]. See AR 79-5407. Red
Cedar’s proposal failed to take a similar approach for CLIN 0009 and CLIN 0012, however, and
the Agency adjusted accordingly. See AR 79-5408 to 10.

         D. External Project Support CLINs: 0008 (JPES O&M); 0011 (JPES RDT&E)

        Finally, Red Cedar argues that the Agency acted in an arbitrary and irrational manner
when it added 600 hours back to CLINs 0008 and 0011, the optional external support CLINs.
See Pl.’s Mot. at 25. According to Red Cedar, the Solicitation did not “forecast[] a single []
related event or any other external events,” and “[c]onsequently, Red Cedar proposed [***].” Id.
Put differently, Red Cedar contends that the Agency erred in assigning it hours for an optional
component of the Solicitation that [***].

         The court disagrees. Red Cedar’s argument omits the fact that although the two CLINs
were optional, Red Cedar proposed a total of [***] for the two CLINs in its initial and first
revised proposal, respectively. See, e.g., AR 79-5408, 5410. Yet for its final proposal, Red
Cedar [***]. See, e.g., AR 79-5408, 5410. The Agency, then, was in the dark regarding Red Cedar’s
intentions about the optional CLINs and thus added back hours to its previously recommended
amount. See AR 37-3523 to 24 (notifying Red Cedar that “[h]ours still appear low”). The court
therefore finds it was reasonable for the Agency to add hours back for the optional CLINs that
Red Cedar previously planned to complete. The burden is on Red Cedar, not the Agency, to set
out with clarity its plan for completing the contract. Further, these CLINs were optional in that
the Agency could choose whether to exercise them. But, as with the option years, the offerors
still needed to bid on performance, and the Agency would evaluate cost as if it would exercise the
option. AR 3-163 to 64.

        Overall, Red Cedar appears to believe that the Solicitation’s terms require the Agency to
give its proposal deference. See Pl.’s Mot. at 14-15 (“The evaluation team was required to
‘consider the offeror’s unique proposed approach, which may be realistic even if it differs from
the IGCE and historical approaches.’”) (emphasis removed). But the Agency was under no
obligation to defer to an offeror’s technical plan. Rather, it could find an offeror’s plan to be
unsatisfactory or less desirable than other proposed plans, as it did here. Indeed, just as Red
Cedar was free to propose its own “unique” approach to the Agency’s needs, the Agency was
equally free to reject that approach as insufficient. The burden was on Red Cedar to present an
approach that would be satisfactory to the Agency. See Software Eng’g Servs. Corp. v. United
States, 85 Fed. Cl. 547, 555 (“Offerors carry the burden of presenting ‘an adequately written
proposal, and an offeror’s mere disagreement with the agency’s judgment concerning the
adequacy of the proposal is not sufficient to establish the agency acted unreasonably.’”) (quoting
United Enter. & Assocs. v. United States, 70 Fed. Cl. 1, 26 (2006)); see also id. (“[The bidder]
was required to demonstrate its capabilities within the proposal.”). Accordingly, the Agency’s
evaluation found that NextGen’s proposal was more technically competent and offered less risk
to the government. This is made apparent by the evolution of NextGen’s proposal from its initial
evaluation to the final submission. Like Red Cedar, NextGen was also presented with a number
of notices related to its staffing and management approach. See, e.g., AR 22-2516 to 18. But
instead of ignoring these notices or responding with significant unexplained last-minute
changes, NextGen improved and refined its bid with each submission. Compare AR 22-
2516 to 18 (38 positions required adjustment in NextGen’s initial proposal), with AR 36-3503 to
04 (3 positions required adjustment after NextGen’s first revised proposal).

        Red Cedar appeared to underestimate the requirements of the contract for the first two
rounds of evaluation. Its initial and first revised proposals were significantly below the
government’s and NextGen’s estimated numbers. The government found as much when it noted
Red Cedar lacked an “understanding of the magnitude of the [contract] requirements.” AR 35-
3484. Only in response to the government’s concerns after the second (and final) round of
discussion did Red Cedar substantially increase its hours and refine its staff and management
approach. Compare AR 19-2473 and AR 35-3470, with AR 61-5138 (showing a 24% and 15%
increase in hours and proposed cost from initial to revised proposal, but a 368% and 244%
increase in hours and proposed cost from the revised proposal to the final proposal). But it did
so in a scattered manner, disregarding the notices provided by the Agency and failing to
adequately explain the changes in its proposal that increased the total hours from its initial
proposal by [***]. The Agency then added back the hours it believed were still required to
complete the contract based on Red Cedar’s approach. Thus, based on the record before the
court, the court cannot find that the Agency lacked a rational basis in its evaluation of Subfactor
6.

       As the court finds the adjustments to Red Cedar’s proposed hours to be reasonable, it
correspondingly finds the increases to its total price to be reasonable as well.

 II.   The Agency’s Evaluation of Subfactor 3 and Failure to Assign Red Cedar a
       Strength for Subfactor 6

         “It is hornbook law that agencies must evaluate proposals and make awards based on the
criteria stated in the solicitation.” Banknote Corp. of Am., Inc. v. United States, 56 Fed. Cl. 377,
386 (2003), aff’d, 365 F.3d 1345 (Fed. Cir. 2004). That said, “it is well-settled that ‘a
solicitation need not identify each element to be considered by the agency during the course of
the evaluation.’” Id. at 387 (quoting Analytical & Research Tech., Inc. v. United States, 39 Fed.
Cl. 34, 45 (1997)) (other citations omitted). Further, the agency has “great discretion in
determining the scope of an evaluation factor.” NEQ, LLC v. United States, 88 Fed. Cl. 38, 48
(2009) (quoting Forestry Surveys & Data v. United States, 44 Fed. Cl. 493, 499 (1999) (citation
omitted)).

        For a protester to prevail on a claim for unequal treatment and the use of unstated
evaluation criteria, it must demonstrate: “(i) the procuring agency used a significantly different
basis in evaluating the proposals than was disclosed; and (ii) the protester was prejudiced as a
result – that it had a substantial chance to receive the contract award but for the error.” Banknote
Corp., 56 Fed. Cl. at 386-87 (emphasis added). And, to demonstrate the use of a significantly
different basis for evaluation, the protester must find appropriate support in the administrative
record. See Iron Bow Techs., LLC v. United States, 136 Fed. Cl. 519, 535-36 (2018) (dismissing
a protest based on unequal treatment where the protestor “points to no evidence in the record to
support its claim.”); see also Academy Facilities Mgmt. v. United States, 87 Fed. Cl. 441, 470-71
(2009) (finding that “the [agency] did not use a ‘significantly different basis’ in evaluating
proposals than was disclosed,” when it assigned the awardee a strength for specified experience
that was not “explicitly require[d]” by the solicitation.) (citing Banknote Corp., 56 Fed. Cl. at
387).

                            A. The Agency’s Evaluation of Subfactor 3

        Red Cedar argues that the Agency committed procurement error by evaluating Red Cedar
and NextGen for Subfactor 3 “against criteria from other subfactors.” Pl.’s Mot. at 29.
Specifically, Red Cedar contends that NextGen unfairly received a “strength” under Subfactor 3
based on its proposal’s inclusion of “[***].” Id. at 30 (citing AR 60-5109). According to Red
Cedar, “[a]utomation was not a requirement of Subfactor 3, nor was it set out as an evaluation
criterion. Instead, automation was to be evaluated under Subfactor 5.” Id. (citing AR 4-365 to
66). Red Cedar also avers that the rationale given for NextGen’s strength in Subfactor 3
“mirror[ed]” that given to it for Subfactor 5. Id.

        The government counters that the Agency is permitted “broad discretion over its
technical evaluations.” Def.’s Cross-Mot. at 37. In this instance, the government contends, the
Agency used that discretion to find “Nextgen’s proposal to use [***].” Id. (quoting AR 60-5109 to 10). In effect,
the decision to include automation for Subfactor 3 was a choice, and one made by NextGen but
not by Red Cedar.

         In this case, nothing in the administrative record indicates that the Agency used a
different basis, much less a significantly different basis, in evaluating the proposals. The two
subfactors were distinct components of the Solicitation. For Subfactor 3, the Solicitation stated
that “[t]he [g]overnment will evaluate the offeror’s proposed approach to testing software builds
for the various releases for all test events in a classified environment.” AR 3-166. For Subfactor
5, the Solicitation stated “[t]he [g]overnment will evaluate the offeror’s proposed approach to
incorporate automated software test tools and associated processes in order to enhance the
overall [command and control] [p]ortfolio testing processes, methods, and tools currently used.”
AR 3-166 to 67 (emphasis added). Accordingly, Subfactor 5 was a broad category, where the
offeror proposed automation that would improve the “overall” performance of the portfolio.
Subfactor 3, by contrast, was narrower and focused on testbed event support. It is reasonable,
then, that an offeror would want to specifically highlight its ability to automate certain processes
within the testbed environment. And it is within the Agency’s discretion to rate the offeror
higher for proposing a feature that exceeds the requirement.

        Further, nothing in the solicitation prevented the Agency from considering automation as
an additional component of Subfactor 3. See AR 3-166 to 67. Contrary to Red Cedar’s
assertion that “automation was to be evaluated under Subfactor 5,” Pl.’s Mot. at 30, there was no
requirement in the solicitation that all automation aspects of a bid must be in Subfactor 5. See
AR 3-166 to 67. Rather, the solicitation was designed to marshal the creativity of the offerors to
address the needs of the Agency. See, e.g., AR 3-154 (“Proposals should clearly show how the
offeror will accomplish the tasks and why they have chosen that particular course.”); AR 4-353,
369 (asking for “unique” proposals). It was up to the offerors to decide how to meet those needs.
NextGen decided to [***] into Subfactor 3, see AR 53-4388, AR 60-5109, while Red Cedar did
not, see AR 54-4653 to 54, AR 62-5160. The Agency was within its discretion to award a
strength to an offeror’s proposal that “[***].” AR 60-5110. Red Cedar’s constrained
understanding of the solicitation, and subsequent failure to clarify with the Agency, does not
transform the issue into one of unequal treatment or the use of unstated criteria.

       Red Cedar also briefly argues that “the Agency did not consider [Red Cedar’s
automation] aspects . . . under Subfactor 3,” and that “[h]ad the Agency done so, [Red Cedar]
would have received the same rating as NextGen under Subfactor 3.” Pl.’s Mot. at 31. Red
Cedar, however, provides no evidence to support this claim, within the administrative record or
otherwise. Nothing in its proposal indicates it provided the same type of automation abilities for
Subfactor 3 as NextGen. Thus, as there is no support in the administrative record, see, e.g., Iron
Bow, 136 Fed. Cl. at 535-36, the court finds that the Agency did not treat the two bidders
unequally or use an unstated evaluation criterion with regard to Subfactor 3.

                           B. Failure to Award a Strength for Subfactor 6

        Red Cedar also contends that the Agency treated it unequally when it assigned a strength
to NextGen for its management/staffing approach. Pl.’s Mot. at 26-28. According to Red Cedar,
it too deserved a strength for Subfactor 6, as its proposal was substantially similar to NextGen’s.
Id. Red Cedar provides a litany of qualifications that its personnel possessed and states it
“[***].” Id. at 26. The inclusion of these individuals, according to Red Cedar, “[o]bviously”
demonstrated its ability to perform the command and control aspects of the contract. Pl.’s Reply
at 14.

        The entire solicitation, however, was based on the command and control portfolio. See,
e.g., AR 3-58 (“The Command and Control [] Test and Evaluation [] Contract.”). The Agency
found providing the names of “[***],” Pl.’s Mot. at 26 (emphasis added), and the resumes of
[***], Pl.’s Reply at 14, was insufficient to warrant a strength under Subfactor 6 in its technical
evaluation. This court has held that being an incumbent alone does not demonstrate capability to
perform. See, e.g., Software Eng’g Servs., 85 Fed. Cl. at 555 (citing Int’l Res. Recovery, Inc. v.
United States, 60 Fed. Cl. 1, 6 (2004), and PGBA, LLC v. United States, 60 Fed. Cl. 196, 209-10
(2004), aff’d, 389 F.3d 1219). Red Cedar’s proposal did identify all “key personnel,” but the
Agency found that in NextGen’s proposal, “[***].” AR 60-5116. The Agency judged
NextGen’s proposal, [***], to be more technically sound and worthy of a strength. The court
therefore finds that the Agency had a rational basis for awarding NextGen a strength in Subfactor
6 and not awarding a strength to Red Cedar.

                        III.   The Agency’s Source Selection Decision

        As the court finds the Agency’s decision to increase the total number of hours in, and the
price of, Red Cedar’s proposal to have a rational basis, and because it finds that DISA did not
treat the two offerors unequally, its decision to select NextGen was reasonable. NextGen’s
proposal was the lowest priced and the highest technically rated, and the Agency properly
determined that it represented the best value for the government.



                                       CONCLUSION

       For the foregoing reasons, Red Cedar’s motion for judgment on the administrative record
is DENIED and the government’s and NextGen’s cross-motions for judgment on the
administrative record are GRANTED. The clerk shall enter judgment accordingly.

       No costs.

       It is so ORDERED.


                                                   s/ Charles F. Lettow
                                                   Charles F. Lettow
                                                   Senior Judge




