              In the United States Court of Federal Claims
                                   Nos. 13-87 C & 13-91 C

                                     (Filed May 6, 2013)1

 * * * * * * * * * * * * * * * *
 CGS ADMINISTRATORS, LLC,          *
 and PALMETTO GBA, LLC,            *
                                   *
             Plaintiffs,           *
                                   *
          v.                       *                Post-Award Bid Protest; Best
                                   *                Value Award Not Shown To Be
 THE UNITED STATES,                *                Arbitrary, Capricious, an Abuse
                                   *                of Discretion, or Contrary to
             Defendant,            *                Law.
                                   *
 NORIDIAN ADMINISTRATIVE           *
 SERVICES, LLC,                    *
                                   *
             Intervenor-defendant. *
 * * * * * * * * * * * * * * * *

     Craig A. Holman, Washington, DC, for plaintiff CGS Administrators, LLC.
Kara L. Daniels, Michael E. Ginsberg, William S. Speros, and Dana E. Peterson,
Washington, DC, of counsel.

     W. Jay DeVecchio, Washington, DC, for plaintiff Palmetto GBA, LLC.
Daniel E. Chudd, Kevin C. Dwyer, Edward Jackson, Ethan E. Marsh, and Adam G.
Unikowsky, Washington, DC, of counsel.




       1
         / This opinion was issued under seal on April 17, 2013. Pursuant to ¶ 4 of the ordering
language, the parties were invited to identify source selection, proprietary or confidential
material subject to deletion on the basis that the material was protected/privileged. Brackets ([ ])
identify the redacted portions of this opinion.
      Tara K. Hogan and Alexis J. Echols, United States Department of Justice,
with whom were Stuart F. Delery, Principal Deputy Assistant Attorney General,
Jeanne E. Davidson, Director, and Reginald T. Blades, Jr., Assistant Director,
Washington, DC, for defendant. Brian Hildebrandt and Christian Maimone,
Office of General Counsel, United States Department of Health and Human
Services, Baltimore, MD, of counsel.

     Paul F. Khoury, Washington, DC, for intervenor-defendant. Daniel P.
Graham, Kathryn Bucher, Tracye Winfrey Howard, Brian G. Walsh, Craig Smith,
and Katherine R. McDonald, Washington, DC, of counsel.

                         ________________________________

                              OPINION AND ORDER
                         ________________________________

Bush, Judge.

       This post-award bid protest is before the court on cross-motions for
judgment on the administrative record filed under Rule 52.1(c) of the Rules of the
United States Court of Federal Claims (RCFC). CGS Administrators, LLC (CGS)
and Palmetto GBA, LLC (Palmetto) challenge the award of Contract No. HHSM-
500-2012-M0012Z to Noridian Administrative Services, LLC (Noridian) by the
Centers for Medicare and Medicaid Services, United States Department of Health
and Human Services (CMS or agency). Plaintiffs’ complaints were filed on
February 1, 2013 and the two cases were consolidated on February 5, 2013. An
administrative record (AR) was filed on February 8, 2013 and was subsequently
corrected and amended on February 14, 2013 and February 21, 2013. The parties’
cross-motions have been fully briefed according to an expedited briefing schedule;
oral argument (Tr.) was held on April 2, 2013. For the reasons discussed below,
plaintiffs’ motions are denied and the government’s and Noridian’s motions are
granted.2

       2
         / This opinion reviews a massive administrative record of more than 26,000 pages and
briefs which exceeded the page limits set forth in this court’s rules. Plaintiffs’ arguments were
detailed and multi-faceted. The court has attempted to give each substantive argument its due
measure. Plaintiffs’ less substantive arguments, which were omitted from this opinion for the
                                                                                         continue...

                                                 2
                                      BACKGROUND

I.     Procurement Summary

       A.      Medicare Administrative Contractor for Jurisdiction E

       “CMS historically . . . used private insurance companies to process claims
and perform related administrative services for Medicare beneficiaries and health
care providers . . . .” AR at 128. More recently, however, these tasks have been
performed, on a regional “jurisdictional” basis, by Medicare administrative
contractors (MACs). Id. at 128-29. The region at issue here is “Jurisdiction E,”
formerly known as Jurisdiction 1 (J1), and is composed of “California, Nevada,
Hawaii, and the Pacific Islands of American Samoa, Guam, and Northern
Mariana.” Id. at 129. The particular services competed in this procurement are
referred to as “claims benefit administration services in support of Medicare Part A
and Part B claims for Jurisdiction E.” Id.

        The tasks to be performed by the MAC chosen through this procurement
include “all core claims processing operations,” “[m]edical review and local
provider education and training,” the “Provider Customer Service Program
(PCSP),” the “Medicare secondary payer (MSP) program,” “[a]ppeals other than
initial claims determinations,” and “[p]rovider oversight.” AR at 129. CMS has
determined that the appropriate contracting vehicle for this MAC contract is a
cost-plus-award-fee (CPAF) contract. Id. at 132-34. The government estimate for
the cost of the contract, including option years, is $506,869,115. Id. at 130.
Palmetto is the incumbent MAC for these services in this region.

       B.      The Solicitation

      The agency issued Solicitation No. CMS-2011-0026 (solicitation or RFP) on
December 5, 2011. AR at 135. Proposals were due by January 25, 2012. Id. Two
amendments to the solicitation were issued. AR Tabs 24-25. Only three offers,
from CGS, Palmetto and Noridian, were received in response to the solicitation.



       2
        / ...continue
sake of brevity, were considered but rejected as unpersuasive.

                                                3
Id. at 13853. The resulting contract is to include a one-year base period, four
option years, and a six-month option for a close-out period. Id. at 1315.

      Award would be based on the written proposals submitted by the offerors, as
supplemented by oral presentations and by past performance information available
to CMS. AR at 2084-85, 2115-17. Offerors were given notice that award might be
made without discussions, through a reference to FAR 52.215-1, 48 C.F.R.
§ 52.215-1 (2011).3 Id. at 2052, 13854. The anticipated contract award date was
June 10, 2012. Id. at 2063.

       C.      Evaluation Criteria

               1.     Overview

     Section M of the solicitation outlines the agency’s proposal evaluation
scheme. CMS would choose the “best value” proposal:

               Award will be made to the Offeror(s) whose proposal
               offers the best overall value to the Government. This
               will be determined by a trade-off technique allowing the
               Government to consider award to other than the lowest
               cost Offeror or other than the highest technically rated
               Offeror in accordance with FAR Part 15.101-1. This
               process permits tradeoffs between cost and non-cost
               factors and allows the Government to accept other than
               the lowest priced proposal.

AR at 2111.

       Offerors were on notice that non-cost factors, when combined, were
“significantly more important than cost or price.” Id. The agency would consider
the reasonableness and the realism of the costs proposed by the offeror; the cost
realism analysis would determine whether


       3
         / All further references to the Federal Acquisition Regulation (FAR) are to the 2011
edition of Title 48 of the Code of Federal Regulations, the version of the FAR applicable to the
procurement actions reviewed here.

                                                4
             estimated proposed cost elements are realistic for the
             work to be performed, reflect a clear understanding of the
             requirement and are consistent with the unique methods
             of performance and materials described in the Offeror’s
             technical proposal.

Id. An offeror’s proposed costs, pursuant to FAR 15.404-1(d)(2), would be
corrected to reflect probable costs predicted by the agency’s cost realism analysis,
where needed. AR at 2111; see also FAR 15.404-1(d)(2). Probable costs would
be the costs used in the agency’s “best value” analysis. FAR 15.404-1(d)(2)(i).

             2.    Evaluation Factors and Aspects

       The agency formed a Technical Evaluation Panel (TEP) to “evaluate and
rate each technical proposal (including the written technical proposal and Oral
Presentation, inclusive of the oral presentation slides) by applying the weighted
Technical Evaluation Factors set forth” in the solicitation. AR at 2115. The first
of the three technical evaluation factors, weighted at 40%, was past performance:

             The evaluation of past performance will be based on the
             Offeror’s demonstrated ability, (considering its
             performance of the requirements of other contracts of a
             similar nature, scope, and complexity as the MAC
             contract) to successfully meet the requirements of the
             [Statement of Work (SOW)] in this solicitation.

Id. The second of the three technical evaluation factors, also weighted at 40%, was
technical understanding, which was reflected in the offeror’s approach to
performing the tasks required by the contract. Id. The third technical evaluation
factor, weighted at 20%, was the offeror’s approach to implementation activities
required to begin performance of the contract services. Id. at 2092, 2115.

       The rating for each of the evaluation factors listed above would be the result
of the TEP’s consideration of identified findings (within each evaluation area), and
further consideration of the following four “aspects” of an offeror’s proposal:
(1) customer service; (2) financial management; (3) operational excellence; and,
(4) innovations and technology. AR at 2116. The relationship between technical


                                          5
evaluation factors and technical evaluation aspects is summarized in the
solicitation:

            In performing its review of each evaluation factor (Past
            Performance, Technical Understanding and
            Implementation), CMS will consider all of its findings in
            relation to one or more of the . . . four aspects. These
            aspects are not factors or subfactors. They are elements
            for CMS’ consideration in evaluating the three factors,
            Past Performance, Technical Understanding and
            Implementation . . . .

Id. (emphasis added). In the court’s view, the evaluation factors provide a useful
tool for dividing an offeror’s proposal into three evaluation areas, whereas the
evaluation aspects offer an additional measure relevant to these evaluation areas
which may identify an offeror’s positive organizational attributes. See id. (stating
that each of the four technical evaluation “aspects” may be considered by CMS as a
measure of a contractor’s abilities).

II.   Evaluation of Proposals

      A.    Overview of Proposal Evaluations by the Technical Evaluation
            Panel (TEP) and the Business Evaluation Panel (BEP)

      The Source Selection Plan (SSP) was approved on December 16, 2011. AR
at 2292. This document describes thorough processes for examining proposals and
evaluating these proposals according to the criteria identified in the solicitation.
There is no real dispute that CMS followed these procedures – for this reason, the
court will not discuss the SSP in detail. It is important to distinguish, however,
various roles established by the SSP.

       The TEP produced the offerors’ ratings for past performance, technical
understanding and implementation. The process followed by the TEP identified,
for each proposal, any strengths, weaknesses, and significant weaknesses related to
these evaluation factors, and also produced each offeror’s numeric score on each of




                                         6
the three evaluation factors.4 The BEP, on the other hand, had a different function.
Rather than produce an evaluation rating, the BEP closely examined the costs of
the services proposed by each offeror. The BEP would determine whether the
offeror’s proposed cost elements were reasonable and realistic, and whether the
technical and cost proposals of an offeror were consistent.

       Along with the TEP and the BEP, there were other participants in the
evaluation process, such as the Technical Cost Analyst (TCA) and the CMS staff
who served as Subject Matter Experts (SMEs). The SMEs, in particular,
played an advisory role, and their comments might be adopted by the TEP or the
BEP, if deemed appropriate. AR at 2281, 5171. The results of the deliberations of
the TEP were presented in the TEP report, which was signed on September 10,
2012. Id. at 5207. The results of the deliberations of the BEP were presented in
three reports, one for each offeror, signed on September 13, 2012. Id. at 10996,
11280, 11834.

      B.     TEP Ratings for the Offerors’ Proposals

             1.     The Rating Scale

       The rating scale adopted by the agency served primarily to take particular
findings of strengths, weaknesses, and significant weaknesses and to express those
findings as a numerical rating for each of the three evaluation factors. AR at
2293 (“The purpose of the Numeric rating methodology is to assess the value of
Offeror proposal claims or promises through review of all pertinent findings
(strengths, weaknesses, significant weaknesses and deficiencies), as defined
below, as they apply to the evaluation factors.”). The
ratings would reflect the evaluators’ prediction of each offeror’s potential for
“success” as the MAC for Jurisdiction E, on a scale that ranged from a lowest score
of .1 to a highest score of .9:

             .6 to .9 - More Likely to succeed than fail, with .9 being
             the highest possible score;

              .5 - Equally likely to succeed or fail;


      4
       / None of the proposals were judged to contain deficiencies. AR at 5177.

                                             7
             .1 to .4 - More likely to fail than to succeed, with .1
             being the lowest possible score.

Id. (formatting altered).

       The court notes, first, that there is nothing inherently suspect about a scale
that measures potential for successful performance in tenths or hundredths of a
point. See CGS Mot. at 36-37 & n.17 (implying that the agency’s rating scale
suffered from “‘false precision’” because it used tenths and hundredths of a point
to distinguish ratings received by the offerors). There is no meaningful difference
between such a scale and a 100 to 900 point scale. One weakness of the scale
adopted by the agency, however, is that the central band of equipoise between
success and failure is defined not as a range but as a single point (.5), leaving
some ambiguity in the interpretation of a rating such as .41 or .59: is the
former closer in meaning to the failure band of the scale, or to .5? Is the
latter closer to the success band, or to .5? Notwithstanding this regrettably
murky aspect of the rating scale, the court finds nothing irrational in the
evaluation rating scheme.

       Once the TEP had produced a rating for each of the evaluation factors for
each offeror, the weighted evaluation factor ratings would produce a final technical
rating for the offeror’s proposal. AR at 2293 (“Each factor will then have its rating
or score multiplied by the weight assigned to that factor (Past Performance - 40%;
Technical Understanding - 40%; Implementation - 20%), the three factor scores
will then be added together to provide a final score for the proposal.”). For two of
the three evaluation factors, technical understanding and implementation, this
summary of the evaluation process adequately explains the process followed to
establish the ratings received by the three offerors. For the past performance
evaluation factor, however, the rating scheme had some additional steps which are
the focus of much debate among the parties.

             2.     The Past Performance Baseline Score and Final Score
                    Evaluation Scheme

      The three offerors are well-known to CMS, in that they have been
contractors with the agency since 1966. AR at 3157, 4573, 5048. Each of the
offerors has current experience as a MAC. Id. at 5180, 5195, 5202. There was a
wealth of information regarding the offerors’ past performance before the agency,
and the court will only discuss this information in the most general terms.

                                           8
       The court must reject plaintiffs’ challenges to the past performance
evaluation scheme adopted by the agency,5 for the simple reason that the court
finds nothing irrational in the past performance evaluation methodology adopted
by CMS. Plaintiffs suggest that the evaluation scheme could have been improved
to produce more accurate ratings. In essence, plaintiffs ask the court to substitute
some other reasonable evaluation scheme (or schemes) for the one adopted by the
agency. Plaintiffs’ burden, however, is to show that the past performance
evaluation scheme employed by CMS was arbitrary. See infra. The court
addresses plaintiffs’ primary challenges to the past performance evaluation scheme
here, as it describes the methodology employed by the TEP.

       First, CMS chose to give greater consideration, and more weight, to the most
relevant types of past performance. More recent experience (i.e., the last three
years) was also the focus of the agency’s past performance evaluation, both of the
offeror and of any significant subcontractors. AR at 5172. In the court’s view, this
approach evidences a focus on general trends in performance, as required by FAR
15.305(a)(2)(i).6 The agency’s scoring scheme deliberately awarded greater weight
to two types of performance indicators (“Contractor Performance Assessment
Rating System (CPARS) reviews, National Institutes of Health (NIH) evaluations”)
than to a third type (“Award Fee evaluations”). AR at 5172. Furthermore, for the
CPARS reviews, the agency concentrated on just four of six evaluative ratings
received in those CPARS reviews. Id. at 5176-77. All of these decisions are
rational and well within the discretion of the agency in conducting a past
performance evaluation of the offerors responding to a solicitation.



       5
        / Plaintiffs’ challenges to the past performance ratings received by the offerors are
discussed in the analysis section of this opinion, infra.
       6
        / The court has found no authority requiring that a procuring agency develop a past
performance evaluation scheme which discerns and assigns a particular, quantified weight to
upward or downward trends in an offeror’s past performance. The regulation cited by Palmetto,
FAR 15.305(a)(2)(i), merely requires that the agency consider “general trends” in the
contractor’s past performance. That standard has been fully met here. The agency averaged
highly relevant past performance ratings received during the most recent three years of contract
performance, and then performed a substantive analysis of the entire body of past performance
source documents before the agency. These analyses accounted for the general trends in past
performance for each of the offerors.

                                                9
       Second, the agency used the three principal past performance indicators
(CPARS, NIH and Award Fee) to calculate what it termed a baseline numeric past
performance rating. AR at 5175 (“The baseline numeric score provided the TEP a
framework to weigh the different adjectival ratings assigned in each formal
performance evaluation report for all Offerors . . . .”). The baseline score
calculation for each offeror began by averaging the Award Fee determinations into
a single rating, and then averaging that single rating with the offeror’s CPARS
and NIH ratings. Id. This baseline score was then adjusted, up or down,
to reflect the TEP’s consideration of other past performance information, as well as
information contained in the narrative sections of the CPARS, NIH and Award Fee
determinations. Id. at 5175-77. Thus, there could be a significant difference
between the baseline and final score for the past performance of each offeror. This
aspect of the evaluation scheme for past performance is also rational and within the
agency’s discretion.7
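
       The two-step averaging described in the TEP report can be sketched in a
few lines of Python. This is an editorial illustration, not part of the record;
all of the input ratings below are hypothetical values supplied for the sketch:

```python
# Sketch of the baseline numeric past performance calculation described
# in the TEP report (AR at 5175). All input ratings here are hypothetical
# illustrations, not figures from the administrative record.
award_fee_ratings = [0.7, 0.6, 0.8]  # hypothetical converted Award Fee scores
cpars_ratings = [0.8, 0.7]           # hypothetical converted CPARS scores
nih_ratings = [0.75]                 # hypothetical converted NIH scores

# Step 1: average the Award Fee determinations into a single rating.
award_fee_avg = sum(award_fee_ratings) / len(award_fee_ratings)

# Step 2: average that single Award Fee rating with the CPARS and NIH
# ratings to produce the baseline numeric past performance score.
pool = cpars_ratings + nih_ratings + [award_fee_avg]
baseline = sum(pool) / len(pool)
```

Because the Award Fee determinations are collapsed into one rating before the
second averaging step, they carry less weight in the baseline than the CPARS and
NIH ratings, consistent with the weighting choices described above; the TEP
could then adjust the baseline up or down in light of the narrative material.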

       Third, CMS utilized a conversion mechanism for the three principal past
performance indicators, so that adjectival ratings in the CPARS, NIH and Award
Fee determinations could be combined, mathematically, to produce the baseline
numeric past performance score for each offeror. Although plaintiffs strive
mightily to convince the court that the conversion scheme was illogical in its
assignment of certain numerical ratings to certain adjectival ratings, these
arguments are not persuasive. The record shows that the conversion scheme, set
forth in a chart in the TEP report, AR at 5173-75, was applied uniformly to each
offeror’s past performance data and that the conversion scheme conveyed
meaningful distinctions in the baseline numeric past performance ratings of the
offerors. To the extent that plaintiffs argue that the TEP baseline numeric past
performance scores do not carry the same meaning as the adjectival ratings in
certain source documents, there is no requirement that there be perfect symmetry
between one particular past performance indicator and an agency’s quantitative



       7
         / It is of no import that individual data points for a particular offeror might have a lesser
or greater impact on the calculation of the baseline numeric score, or that a particular offeror’s
constellation of data points permitted a slightly lower or slightly higher maximum baseline score.
The baseline score provided a fair framework for the analysis of each offeror’s ratings on the
most highly relevant past performance information available to the agency, and the baseline
score could be adjusted by the TEP upward or downward to reflect the information contained in
all of the past performance source documents before the agency.

                                                 10
(and preliminary) rating of an offeror’s past performance.8 The court finds that the
agency’s past performance evaluation scheme, as a whole, was rational and within
the agency’s discretion.

               3.     The TEP Ratings for CGS, Palmetto and Noridian

       Turning first to the past performance evaluation factor, Palmetto received
the lowest scores in this area: its Award Fee average score was .55, its baseline
score was .58, and its final score was .4. AR at 5190. CGS fared better in its
past performance ratings: its Award Fee average score was .7, its baseline score
was .66, and its final score was .7. Id. at 5205. Noridian had the best past
performance ratings among the offerors: its Award Fee average score was .7, its
baseline score was .82, and its final score was .87. Id. at 5198.

       As for the technical understanding evaluation factor, CGS received the
lowest score (.6), Palmetto was more highly rated (.7), and Noridian received a
perfect score (.9). AR at 5206. For the implementation factor, Noridian received
the lowest rating (.7), CGS received a higher rating (.8), and Palmetto, the
incumbent, received a perfect rating (.9). Id. Once these ratings were weighted to
produce the final technical evaluation score for each offeror, Noridian was well in
the lead at .85, followed by CGS at .68, with Palmetto trailing at .62. Id.
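
       The weighting arithmetic can be checked against the figures reported
above. The following Python sketch, an editorial illustration rather than part
of the record, reproduces the 40/40/20 computation using the final factor
ratings quoted from the TEP report:

```python
# Weighted final technical score: Past Performance 40%, Technical
# Understanding 40%, Implementation 20% (AR at 2293).
WEIGHTS = {"past": 0.4, "technical": 0.4, "implementation": 0.2}

# Final factor ratings as reported in the TEP report.
ratings = {
    "Palmetto": {"past": 0.4,  "technical": 0.7, "implementation": 0.9},
    "CGS":      {"past": 0.7,  "technical": 0.6, "implementation": 0.8},
    "Noridian": {"past": 0.87, "technical": 0.9, "implementation": 0.7},
}

def final_score(r):
    """Sum of the factor ratings, each multiplied by its assigned weight."""
    return sum(WEIGHTS[f] * r[f] for f in WEIGHTS)

for offeror, r in ratings.items():
    print(offeror, round(final_score(r), 2))
# Palmetto 0.62, CGS 0.68, Noridian 0.85 (0.848 before rounding)
```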

       The TEP concluded that:

               After reviewing and evaluating the proposals, in
               accordance with the evaluation factors and aspects as
               described in the RFP, the TEP determine[d] that Noridian
               Administrative Services emerged as having proposed a
               better technical approach to both the general and the
               Jurisdiction E-specific workload requirements than the
               other two Offerors. The Panel also agreed that this same
               proposal demonstrated more realistic technological

       8
         / Palmetto also contends that the agency’s past performance evaluation was irrational
because Palmetto’s CPARS ratings are much better than its final past performance score. Tr. at
44-45. The TEP, however, could rationally assign an offeror a lower past performance rating if
the distinction between the TEP’s past performance rating and the CPARS ratings is adequately
explained in the record. Here, the TEP report contains a rational explanation for that distinction.
See AR at 5184-90, 5250-67.

                                                11
             innovations that would lead to both improved customer
             service for providers and administrative savings to the
             Medicare program.

AR at 5206. The court now turns to the evaluation of the offerors’ costs by the
BEP.

      C.     Overview of Adjustments to Each Offeror’s Cost Elements

       The BEP reports on the offerors’ price proposals reflect, in the court’s view,
a rational and well-documented evaluation of the costs proposed by the offerors.
According to each of these reports, “the business proposal evaluation consist[ed] of
a combination of price analysis, cost analysis, and cost realism analysis of the
Offeror’s cost proposal.” AR at 10957, 11216, 11773. Plaintiffs do not challenge
the type of analysis conducted by the BEP, but instead challenge certain cost
realism adjustments to the offerors’ proposed cost elements. These challenges will
be addressed in the analysis section of this opinion.

       For each offeror, there are three dollar figures which summarize the work of
the BEP. First, there is the figure representing the offeror’s proposed costs for the
contract, which is derived from the offeror’s cost proposal for the base period and
all option periods of the contract, including award fees. For CGS this figure is
$344,515,427, for Noridian this figure is $345,206,187, and for Palmetto this
figure is $392,324,975. AR at 10961, 11220, 11777. The court notes that even
without the cost adjustments produced by the BEP, the offerors’ price proposals
identify CGS as the lowest-cost offeror, closely followed by Noridian, with
Palmetto a distinctly more expensive competitor.

       The second summary figure for each offeror is the total cost realism
adjustment to the offeror’s cost proposal produced by the BEP, which summarizes
a number of individual adjustments to particular cost elements. This figure
reflects, in the view of the BEP, the total amount needed to adjust, upward, an
offeror’s proposed costs to more accurately estimate that offeror’s probable costs
over the life of the contract. For CGS this adjustment figure is $26,716,738, for
Noridian this figure is $27,839,554, and for Palmetto this figure is $15,856,240.
AR at 10961, 11220, 11777.



                                         12
       The third summary figure for each offeror is the total of its proposed costs
(the first figure discussed supra) and the BEP’s total cost adjustment amount (the
second figure discussed supra); this total figure represents each offeror’s probable
costs over the life of the contract. For CGS this total probable costs figure is
$371,232,165, for Noridian this figure is $373,045,741, and for Palmetto this
figure is $408,181,215. AR at 10961, 11220, 11777. The court notes that after the
cost adjustments produced by the BEP, the offerors are still in the same position as
regards the price competitiveness of their proposals: the offerors’ probable costs
identify CGS as the lowest-cost offeror, closely followed by Noridian, with
Palmetto a distinctly more expensive competitor.

       The court observes that it is these adjusted, probable costs for each offeror
that were used in the best value award decision of the contracting officer. AR at
13880, 13884-85. This approach is consistent with both the RFP and the FAR.
See AR at 2111; see also FAR 15.404-1(d)(2)(i). The court notes, further, that the
price premium paid by the agency to choose Noridian over CGS is, in relative
terms, extremely small. The difference in probable costs between Noridian and
CGS is only 0.49%, less than one-half of one percent. The court turns next to
the source selection decision memorandum, which presents the contracting
officer’s rationale for her best value award of the contract to Noridian.
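
       The cost arithmetic summarized above can be verified directly from the
figures quoted for each offeror. The short Python sketch below is an editorial
check on those sums, not part of the record:

```python
# Proposed costs and total BEP cost realism adjustments, in dollars
# (AR at 10961, 11220, 11777).
proposed = {"CGS": 344_515_427, "Noridian": 345_206_187, "Palmetto": 392_324_975}
adjustment = {"CGS": 26_716_738, "Noridian": 27_839_554, "Palmetto": 15_856_240}

# Probable cost over the life of the contract = proposed cost + adjustment.
probable = {offeror: proposed[offeror] + adjustment[offeror]
            for offeror in proposed}

# Noridian's price premium over CGS, as a share of CGS's probable cost.
premium = (probable["Noridian"] - probable["CGS"]) / probable["CGS"]
print(f"{premium:.2%}")  # 0.49%
```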

III.   Best Value Award Decision

       A.    Overview

       The contracting officer served as the source selection authority (SSA) for
this procurement. AR at 2272. The record of her award decision is presented, in
large part, in a sixty-page document titled “Source Selection Authority Decision
Memorandum,” dated September 20, 2012. AR Tab 62. This source selection
memorandum (SSM) is extremely thorough and well-reasoned; indeed, the detailed
analysis contained in this document is more coherent and comprehensive than the
analyses that this court typically encounters in source selection documents.9




       9
        / Bid protests of MAC contract awards appear to be common. See, e.g., Noridian
Admin. Servs., LLC, B-401068.13, 2013 CPD ¶ 52, 2013 WL 427848, at *1-*2 (Comp. Gen. Jan.
16, 2013) (describing several protests that followed the award of a MAC contract).

                                           13
       The SSM also incorporates the TEP report and the BEP reports, and the SSA
largely adopts the findings and recommendations in these reports. AR at 13858-
59, 13880. The parties briskly debate whether the SSA relied upon various
numeric scores used by the TEP as a framework for comparing proposals.
The court concludes that the SSA relied upon numeric scores, and upon other
findings by the TEP, to a great extent, with three caveats. First, the SSA clearly
acknowledged that the evaluation of proposals by the TEP was more qualitative
than quantitative. Id. at 13856. As a result, the assignment of strengths and
weaknesses to proposals did not mechanically correlate to a particular rating on the
rating scale for the evaluation factors, either in the case of the TEP report or in the
case of the SSM. Id.

       The second caveat concerns the adjustment of baseline numeric scores for
past performance to final evaluation ratings for past performance. The SSM
clearly describes the baseline numeric scores as providing a baseline score, not a
final score. AR at 13858. Thus, the SSA’s reliance on baseline scores is a
nuanced one, because adjustment of the baseline scores was contemplated before a
final rating could be established for each offeror on the past performance
evaluation factor.10 The SSA relied on baseline numeric scores for past
performance, but only as these scores provided a framework for further evaluative
scoring of past performance.

       The final, and perhaps most important, caveat is that the SSA exercised her
independent judgment and did not expressly adopt every finding, rating and
recommendation of the BEP and the TEP.11 AR at 13853, 13859, 13880. In
particular, the SSA took into account information that was not available during
initial TEP deliberations and discussed the impact of such information on her
award decision. See, e.g., AR at 13904-05. Thus, although the SSA relied to a
great extent on the findings and ratings produced by the TEP, the SSA’s reliance
was not absolute; rather, the SSA’s award decision was informed not only by her
independent judgment but also by information that was not discussed in the TEP
report.

       10
          / To the extent that the SSA’s brief recitation of the information considered by the TEP
as it adjusted baseline numeric scores for past performance differs from the more complete
description of this process in the TEP report, the court, following guidance provided in the SSM,
relies on the TEP report.  See AR at 13858 (“A complete description of the Past Performance
evaluation, and the methodology used, is contained within Section II of the TEP Report.”).

       11
         / The parties have not noted any instances where the SSA and the BEP disagreed as to
a particular cost realism analysis finding.

      B.     Trade-off and Best Value Award Rationale

      The SSA adopted the evaluation ratings produced by the TEP and concluded
that Noridian offered the proposal with the highest technical ranking. AR at
13890. She also adopted the BEP’s adjustments to the offerors’ proposed costs.
Id. When the offerors’ probable costs were considered, CGS was the lowest-cost
offeror, with Noridian a close second as to price. Id. A best value trade-off
analysis, analyzing the relative advantages of the proposals submitted by CGS and
Noridian, was then conducted by the SSA. Id. Although Palmetto’s higher costs
rendered a trade-off analysis of Noridian’s and Palmetto’s proposals unnecessary,
the SSA nonetheless included a full comparative analysis of all three proposals. Id.
at 13891.

       The SSA noted that in this procurement “all evaluation factors other than
cost or price, when combined are significantly more important than cost or price.”
AR at 13891. She further noted that Noridian’s “proposal represents the best
overall value to the government as demonstrated by [its] superior technical rating.”
Id. The SSA concluded that the strengths of Noridian’s proposal outweighed the
cost advantage of the proposal submitted by CGS. Id.

      The SSM contains a lengthy and detailed analysis of the proposals submitted
by CGS, Noridian and Palmetto, and discusses each proposal in light of the
evaluation factors and aspects set forth in the RFP. At the conclusion of this
analysis, the SSA stated that:

             The [Noridian] and CGS proposed costs were
             competitive, with [Noridian’s] proposed costs being
             $690,760 higher than CGS proposed cost; and after cost
             realism adjustments, the [Noridian] costs were
             $1,813,576 higher than the CGS costs. However based
             on the [Noridian] consistent historical innovations and its
             ability to experience contract cost lower than the
             negotiated cost, I am confident that [Noridian] will bring
             the same superior performance and commitment to
              technological advancements and consequent cost savings
              to the [Jurisdiction E] contract as it has done with its JD
             and J3 contracts. The anticipated cost savings
             notwithstanding, technical ability is rated as the most
             important factor, and the SSA is willing to pay a small
             cost premium for a clearly superior technical approach.

AR at 13911. The SSA also noted that the TEP and the BEP “evaluations were
comprehensive, objective and fair.” Id. Because Noridian’s “proposal
represent[ed] the best overall value for CMS . . . [and was] determined to have no
significant weakness or technical or cost issues requiring discussion,” the SSA
awarded the contract to Noridian without discussions. Id. at 13890, 13910, 13912.

       The SSA also, in the alternative, conducted a trade-off analysis using
different assumptions as to the costs of the offerors’ proposals. For this trade-off
analysis, she compared “the costs proposed by CGS and Palmetto prior to the cost
realism analysis – as compared to [Noridian]’s adjusted costs.” AR at 13911.
Through this analysis the SSA made an alternative finding, i.e., that a greater cost
premium was justified to obtain the advantages of Noridian’s proposal. See id. (“I
can state with certainty that I would nonetheless recommend an award to
[Noridian] based on . . . past performance and technical approach discriminators”
that outweigh a greater cost premium between CGS’s proposed costs and
Noridian’s adjusted costs). Thus, the SSA was willing to pay a price premium of
$28,530,314, or 8.3%, to obtain a “clearly superior technical approach” from
Noridian for the MAC contract for Jurisdiction E. Id. at 13890, 13911.

IV.   GAO Protest

       Both Palmetto and CGS filed timely protests of the contract award with the
Government Accountability Office (GAO). The consolidated protests were
described as “[p]rotests challenging the agency’s evaluation of proposals under the
solicitation’s past performance, implementation, and technical understanding
factors [and] challenging the agency’s evaluation of the protesters’ and awardee’s
proposed costs for realism, and upward adjustment of those costs.” GAO Opin. at
1. These protests were denied on the merits by GAO on January 18, 2013. Id. On
February 1, 2013, CGS and Palmetto filed bid protests in this court, bringing
similar challenges to the award of the MAC contract for Jurisdiction E to Noridian.

                                   DISCUSSION

I.    Jurisdiction

       This court “shall have jurisdiction to render judgment on an action by an
interested party objecting to a solicitation by a Federal agency for bids or proposals
for a proposed contract or to a proposed award or the award of a contract or any
alleged violation of statute or regulation in connection with a procurement or a
proposed procurement.” 28 U.S.C. § 1491(b)(1) (2006). The jurisdictional grant is
“without regard to whether suit is instituted before or after the contract is
awarded.” Id. As a threshold jurisdictional matter, however, the plaintiff in a bid
protest must show that it has standing to bring the suit. Info. Tech. & Applications
Corp. v. United States, 316 F.3d 1312, 1319 (Fed. Cir. 2003) (ITAC); Myers
Investigative & Sec. Servs., Inc. v. United States, 275 F.3d 1366, 1369 (Fed. Cir.
2002) (citation omitted).

II.   Standards of Review

      A.     Judgment on the Administrative Record

      RCFC 52.1(c) provides for judgment on the administrative record. To
review a motion, or cross-motions, under RCFC 52.1(c), the court asks whether,
given all the disputed and undisputed facts, a party has met its burden of proof
based on the evidence in the record. Bannum, Inc. v. United States, 404 F.3d 1346,
1356 (Fed. Cir. 2005). The court must make fact findings where necessary. Id.
The resolution of RCFC 52.1 cross-motions is akin to an expedited trial on the
paper record. Id.

      B.     Bid Protest Review

       First, the plaintiff in a bid protest must show that it has standing to bring the
suit. ITAC, 316 F.3d at 1319. This may be accomplished by demonstrating that
the plaintiff was an actual bidder and that it was prejudiced by the award to the
successful offeror. Id. (citing Am. Fed’n of Gov’t Employees v. United States, 258
F.3d 1294, 1302 (Fed. Cir. 2001) (AFGE)). Prejudice is proven by establishing
that the plaintiff had a substantial chance of receiving the contract, but for the
alleged procurement error. Id. (citing Alfa Laval Separation, Inc. v. United States,
175 F.3d 1365, 1367 (Fed. Cir. 1999)).


       As the United States Court of Appeals for the Federal Circuit has stated, “the
proper standard to be applied in bid protest cases is provided by 5 U.S.C.
§ 706(2)(A) [(2006)]: a reviewing court shall set aside the agency action if it is
‘arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with
law.’” Banknote Corp. of Am. v. United States, 365 F.3d 1345, 1350-51 (Fed. Cir.
2004) (citing Advanced Data Concepts, Inc. v. United States, 216 F.3d 1054,
1057-58 (Fed. Cir. 2000)); see also 28 U.S.C. § 1491(b)(4) (describing this court’s
standard of review for bid protests). Under this standard, a procurement decision
may be set aside if it lacked a rational basis or if the agency’s decision-making
involved a violation of regulation or procedure. Impresa Construzioni Geom.
Domenico Garufi v. United States, 238 F.3d 1324, 1332 (Fed. Cir. 2001) (citations
omitted). De minimis errors in the procurement process, however, do not justify
relief. Grumman Data Sys. Corp. v. Dalton, 88 F.3d 990, 1000 (Fed. Cir. 1996)
(citing Andersen Consulting v. United States, 959 F.2d 929, 932-33, 935 (Fed. Cir.
1992)). The bid protest plaintiff bears the burden of proving that a significant error
marred the procurement in question. Id. (citing CACI Field Servs., Inc. v. United
States, 854 F.2d 464, 466 (Fed. Cir. 1988)).

       The higher the degree of discretion allotted the contracting officer, the more
difficult it is for a protestor to prove that the procurement decision was arbitrary
and capricious. Burroughs Corp. v. United States, 617 F.2d 590, 597 (Ct. Cl.
1980) (citation omitted). Negotiated procurements give a “breadth of discretion”
to the contracting officer, and impose a heavier burden of proof on a protestor. Id.
at 598 (citation omitted). Similarly, “best value” contract awards give a
contracting officer more discretion than awards based on price alone. Galen Med.
Assocs., Inc. v. United States, 369 F.3d 1324, 1330 (Fed. Cir. 2004) (citing E.W.
Bliss Co. v. United States, 77 F.3d 445, 449 (Fed. Cir. 1996)). Thus, the
protestor’s burden is especially heavy in negotiated, best value procurements.
Banknote Corp. of Am. v. United States, 56 Fed. Cl. 377, 380 (2003) (citations
omitted), aff’d, 365 F.3d 1345 (Fed. Cir. 2004).

        The deference afforded to an agency’s decision must be even greater when a
trial court is asked to review a technical evaluation. “[T]echnical ratings . . .
involve discretionary determinations of procurement officials that a court will not
second guess.” E.W. Bliss, 77 F.3d at 449 (citations omitted); Omega World
Travel, Inc. v. United States, 54 Fed. Cl. 570, 578 (2002) (“It is well settled that
contracting officers are given broad discretion with respect to evaluation of
technical proposals.” (citing E.W. Bliss, 77 F.3d at 449)).  “[W]here an agency’s
decisions are highly technical in nature, . . . judicial restraint is appropriate and
proper.” Electro-Methods, Inc. v. United States, 7 Cl. Ct. 755, 762 (1985) (citing
Isometrics v. United States, 5 Cl. Ct. 420, 423 (1984)).

       “‘If the court finds a reasonable basis for the agency’s action, the court
should stay its hand even though it might, as an original proposition, have reached
a different conclusion as to the proper administration and application of the
procurement regulations.’” Honeywell, Inc. v. United States, 870 F.2d 644, 648
(Fed. Cir. 1989) (quoting M. Steinthal & Co. v. Seamans, 455 F.2d 1289, 1301
(D.C. Cir. 1971)). If, on the other hand, “the trial court determines [that] the
government acted without rational basis or contrary to law when evaluating the
bids and awarding the contract[,] . . . it proceeds to determine, as a factual matter, if
the bid protester was prejudiced by that conduct.” Bannum, 404 F.3d at 1351.
The protestor again bears the burden of proof, and must “show that there was a
‘substantial chance’ [the plaintiff] would have received the contract award but for
the [government’s] errors in the bid process.” Id. at 1358 (citations omitted). If a
protestor can show that, but for the procurement error of the agency, there was a
substantial chance that it would have won the contract award, prejudice has been
established. Id. at 1353 (citations omitted). “Prejudice is a question of fact.” Id.
(citing Advanced Data Concepts, 216 F.3d at 1057).

III.   Standing

       Neither the government nor Noridian contests plaintiffs’ standing to bring
these consolidated bid protests. But for the procurement errors alleged by
plaintiffs, CGS and Palmetto each had a substantial chance of winning the MAC
contract for Jurisdiction E. Plaintiffs therefore have standing to bring this protest.
ITAC, 316 F.3d at 1319.

IV.    Analysis

       The court first addresses two threshold issues of a general nature. The court
then proceeds to examine the three major areas of contention in this case: the past
performance evaluation, the technical understanding evaluation, and the cost
adjustments produced as a result of the agency’s cost realism analysis. None of
plaintiffs’ challenges invalidate the agency’s best value award decision.



       A.      Scope of Review Limited to the Contemporaneous Record Before
               the Agency

        The court first addresses the scope of its review. CGS has emphasized that
CMS’s award decision must stand or fall based on the reasoning presented in the
contemporaneous documentation produced by the TEP, the BEP, and the
contracting officer. See, e.g., CGS Mot. at 11, 13-14. The court agrees that the
best evidence of rational decision-making in this procurement must be found in the
contemporaneous documents produced by the agency. See, e.g., OMV Med., Inc. v.
United States, 219 F.3d 1337, 1344 (Fed. Cir. 2000) (citing Bowman Transp., Inc.
v. Arkansas–Best Freight Sys., Inc., 419 U.S. 281, 285-86 (1974); SEC v. Chenery
Corp., 332 U.S. 194, 196 (1947)); see also Axiom Res. Mgmt., Inc. v. United
States, 564 F.3d 1374, 1379-80 (Fed. Cir. 2009) (citing Murakami v. United States,
46 Fed. Cl. 731, 735 (2000), aff’d, 398 F.3d 1342 (Fed. Cir. 2005)). In this case,
the court has determined that it need not consider the documents produced by the
parties during the GAO litigation.12 The contemporaneous documents produced by
CMS as it made its award decision provide an adequate factual background to
resolve all of plaintiffs’ challenges in this bid protest. The court now turns to
plaintiffs’ argument that the evaluation of their proposals did not contain adequate
detail.

       B.      The Agency’s Decision to Generally Limit its Evaluative
               Commentary to Distinctive Strengths, Weaknesses, and
               Significant Weaknesses, and to Omit Mention of Features
               Proposed by the Offerors Which Simply Met RFP Requirements,
               Was Rational

      The TEP explicitly noted in the TEP report that certain features of an
offeror’s proposal would not be mentioned in the report if those features simply
met the requirements stated in the RFP:

               As instructed by the Contracting Officer, and in
               accordance with the Source Selection Plan, each TEP
               member documented strengths, weaknesses, significant
               weaknesses, deficiencies, and other notes identified

              during their independent review of the full written
              proposals as well as during oral presentations in relation
              to the three factors noted above.  If a particular feature of
              a proposal is not included in the TEP report or in the
              findings files, it means the TEP considered that feature
              “met” by the Offeror.  That is to say, the feature did not
              warrant a strength or a weakness and was viewed by the
              TEP to have met the Government’s expectations.

AR at 5170.  This approach was consistent with the Source Selection Plan.  See id.
at 2275, 2277, 2282-83.  In particular, the SSP required the TEP report to “clearly
and concisely describe[] proposal evaluation consensus results, including the
strengths, weaknesses, significant weaknesses, deficiencies associated with each
proposal based upon the evaluation factors for award.”  Id. at 2275 (emphasis
added).  The court finds that the decision to omit a discussion of every feature of
an offeror’s proposal which simply met contract requirements was rational.
Indeed, given the level of detail provided by each offeror, no other approach, in the
court’s view, would have been practical.

       12
         / Thus, the following portions of the administrative record will not be cited in this
opinion: AR Tabs 1-8, 66-88.

       CGS seizes upon the fact that certain features of its technical proposal did
not receive comment by the TEP, and suggests that these omissions indicate
arbitrary decision-making, disparate treatment of offerors, inadequate
documentation of the evaluation process, and other violations of procurement
principles. See, e.g., CGS Mot. at 32, 33 n.13, 34, 36 n.16; CGS Reply at 19, 22-
24. The court cannot agree. The TEP report could not have commented on every
feature of the offerors’ proposals and still have provided a concise, useful
evaluation summary to assist the SSA in her award decision. As to the caselaw
cited by CGS, the court finds nothing in these authorities which indicates that a
procuring agency must comment on every feature of an offeror’s proposal. The
court finds that the TEP report, the narrative portion of which runs approximately
forty pages, adequately documents the evaluation of the offerors’ proposals.

      C.     Past Performance

       As stated earlier in this opinion, the court finds that the past performance
evaluation scheme employed by CMS was rational, because it relied on meaningful
distinctions in the offerors’ most relevant past performance to determine, first, a
baseline numeric rating for past performance, which was then adjusted after a
thorough and comprehensive review of all of the past performance data available to
the agency.13 The court turns to plaintiffs’ other substantive arguments regarding
the past performance evaluation of the offerors’ proposals.

       Both CGS and Palmetto argue, at considerable length, that the baseline scores and
the final scores for past performance assigned to their proposals were flawed.
Plaintiffs also contend that Noridian’s past performance ratings were too high.
None of these arguments have merit. The court begins with the arguments
presented by CGS as to its past performance ratings.

              1.     CGS’s Past Performance Ratings Were Not Irrational

       Aside from CGS’s numerous but unpersuasive contentions that the agency’s
past performance evaluation scheme was fundamentally flawed, see supra, CGS
also complains that its past performance ratings by the TEP and SSA were
subjective, arbitrary and reflected unequal treatment. See CGS Mot. at 42-43; CGS
Reply at 32-33. The court notes first that there is often some element of
subjectivity in proposal evaluations, and that this phenomenon is not necessarily
objectionable. See, e.g., Sci. Applications Int’l Corp. v. United States, 108 Fed. Cl.
235, 274 (2012) (“[T]hese subjective and diverse evaluation determinations of risk
to the government are well within the substantial discretion of the Agency and not
for the court to second-guess.”). Here, the record clearly shows that CMS
principally relied upon objective evaluation criteria to determine each offeror’s
past performance rating, and did not overly engage in subjective adjustments to the
baseline numeric scores of the offerors.

       As to CGS’s other allegations of error, the primary objection is that
“satisfactory,” “good,” “very good,” and “excellent” adjectival ratings in CGS’s
NIH/CPARS reviews only translated to “lackluster” .65 past performance ratings
when converted to numeric scores by the TEP. CGS Reply at 31-32 (citing AR at
5205). At oral argument, CGS pointed to its “lackluster” past performance rating
and suggested that the agency’s past performance rating scheme turned CGS’s
good performance reviews into bad reviews. Tr. at 37. As stated supra, however,
the court finds that the agency’s evaluation scheme was rationally and uniformly
applied to a significant amount of relevant past performance information.  None of
CGS’s arguments persuade the court that CGS’s baseline past performance score
of .66 and its final past performance rating of .7 were irrational, arbitrary, or the
product of unequal treatment of the offerors’ proposals.

       13
         / The court divided plaintiffs’ arguments concerning the agency’s past performance
evaluation into two types: (1) arguments attacking the past performance evaluation scheme are
discussed in the background section of this opinion; (2) arguments attacking particular past
performance evaluation findings and ratings are discussed here.

               2.      Palmetto’s Past Performance Ratings Were Not Irrational

      Like CGS, Palmetto offered numerous but unpersuasive arguments that the
agency’s past performance evaluation scheme was fundamentally flawed. See
supra. In addition, Palmetto alleges that several errors marred the scoring of its
past performance, most of which relate primarily to the downward adjustment of
Palmetto’s baseline score (.58) to arrive at .4 for Palmetto’s final past performance
score. The court will briefly address each of these arguments.

                       a.     Transition Period Difficulties

       Palmetto argues that it received lower performance evaluations during a
unique and difficult “transition period” and that these low ratings, at the very least,
should have been heavily discounted by CMS. Palmetto Mot. at 20-21. The
agency responds that there was nothing unfair in the weight accorded the low
performance ratings, which reflected performance problems that lingered well
after an initial transition period. Def.’s Mot. at 27-29. The court must agree with
defendant. To the extent that these low ratings contributed to Palmetto’s baseline
past performance score and the adjustment to that score, the court finds nothing
irrational in the agency’s consideration of the “transition period” ratings received
by Palmetto.

                       b.     Precipitous Drop from .58 to .4

       It is clear that Palmetto’s baseline past performance score received a greater
adjustment than the baseline scores of CGS and Noridian (negative .18 compared
to positive .04 or positive .05).  The court, however, finds nothing irrational in the
magnitude of the adjustment.14  Palmetto argues that there is no “coherent
explanation” for the adjustment.  Palmetto Mot. at 22.  The TEP report, however,
contains a lengthy analysis of Palmetto’s past performance, based on information
gleaned from numerous sources.  AR at 5184-90.  As to the adjustment, in
particular, the TEP gave a coherent explanation, in the court’s view.  First, a
general description of the adjustment was given:

                The TEP . . . focused on whether the strengths and
                weaknesses outlined in the sections above should
                increase or decrease the overall rating and to what
                degree.  Based on the significant weaknesses highlighted
                above, the TEP concluded that the impact of poor
                performance in these areas is so severe and the risk to the
                Medicare program is so high, that these failures warrant a
                significant reduction in the rating.

Id. at 5189.  Second, the TEP noted two particular failings that warranted a
reduction in Palmetto’s baseline past performance score.  Id.  Under the deferential
standard of review applicable here, the downward adjustment of Palmetto’s
baseline past performance score was not irrational.

       14
          / A score of .58 can reasonably be interpreted to be within the “Equally likely to
succeed or fail” range (described as .5 on the scale adopted by the agency), rather than in the
“More likely to succeed than fail” range (described as .6 to .9).  AR at 2293.  Thus, Palmetto’s
fall from .58 to .4 could reasonably be described as slipping from “Equally likely to succeed or
fail” to “More likely to fail than to succeed” (.1 to .4).  Id.

                       c.      Most Recent Option Year Information

       Palmetto argues that there are problems with the agency’s consideration of
its CPARS evaluation for Option Year 3 of its Jurisdiction 1 MAC contract.
Palmetto Mot. at 16-17.  Defendant concedes that this past performance
information was not considered by the TEP in calculating a baseline score for
Palmetto, but notes that this information only became available after the TEP had
completed its analysis of past performance.  Def.’s Mot. at 20.  The court finds no
fault with the agency’s consideration of Option Year 3 only in the adjustment of
Palmetto’s baseline score, rather than in the baseline score itself.

       Furthermore, according to Noridian, the Option Year 3 ratings would have
only increased Palmetto’s baseline score by .01, raising the baseline score from .58
to .59.  Noridian Mot. at 33.  The record supports Noridian’s contention.  See AR
at 5190; Palmetto Mot. at 16. Even if the omission of the Option Year 3
information in Palmetto’s baseline score could be considered to be an evaluation
error on the part of CMS, the de minimis alleged error in Palmetto’s baseline score,
which only provided a framework for the development of a final past performance
score, did not prejudice Palmetto.

      Palmetto also does not find the SSA’s explanation of her consideration of
the Option Year 3 information to be reasonable, adequately documented or
supportable. Palmetto Mot. at 16-17. The court disagrees. After a lengthy
discussion of Palmetto’s improved performance during Option Year 3, she
concluded that:

               The above recent information demonstrates that
               Palmetto’s performance has improved and is definitely a
               benefit to the agency and the provider community. The
               SSA considers this a strength with long term cost saving
               implications and shows that Palmetto is making strides to
               correct deficits in its past performance. However, it is
               the opinion of the SSA that this performance does not
               reach the level of consistent, reliable excellence
               demonstrated by the [Noridian] past performance and
               does not warrant, at this time, a change in Palmetto’s past
               performance rating. In fact, it was the opinion of the
               SSA, prior to its review of the J1 OPY 3 CPARs review
               that the Palmetto past performance was such that its
               numeric rating actually warranted being lowered because
               of those SSA concerns related to the significant
               weaknesses assigned by the TEP . . . .

AR at 13905. The SSA then proceeded to detail exactly which past performance
deficits justified keeping Palmetto’s past performance final score at .4. Id. at
13905-06. The court finds nothing unreasonable, inadequate or unsupportable in
the SSA’s explanation of her consideration of the Option Year 3 past performance
information for Palmetto.15


       15
           / Palmetto casts aspersions on the explanation provided by the SSA, describing her
final, unchanged past performance rating of Palmetto as an “improbable coincidence,” Palmetto
Mot. at 17, and “plainly unsupported and unsupportable,” Palmetto Reply at 9.  The court notes
that “there is a presumption . . . that government [procurement] officials act in good faith.”
Galen Medical, 369 F.3d at 1335 (citing Am-Pro Protective Agency, Inc. v. United States, 281
F.3d 1234, 1239 (Fed. Cir. 2002)) (emphasis removed).  That presumption has not been rebutted
here.

                       d.      Allegations of Impermissible Double-Counting of
                               Palmetto’s Past Performance Data

        Palmetto argues that one of the flaws in its past performance rating was the
“double-counting” of certain performance periods (those which included Award
Fee reviews) as opposed to the “single-counting” of another performance period
(which was not eligible for Award Fees).  Palmetto Mot. at 18-19; Palmetto Reply
at 10-11.  This argument is not persuasive.  The agency included an average
Award Fee score, and each NIH and CPARS score then available, to calculate a
baseline past performance score.  CMS coherently explained why such a
calculation provided a framework for the further consideration of all past
performance information available for Palmetto and the other offerors.  AR at
5172-77.  Both the baseline score methodology and the adjustment methodology
were reasonable.  To the extent that Palmetto argues that its best performance
period was undervalued by CMS in the final past performance rating assigned
Palmetto, the record does not reveal that the agency’s consideration of that
performance period was irrational.

        At oral argument, Palmetto also complained that one of the primary past
performance source documents, CPARS, contains subsidiary measures that the
agency double-counted.  Tr. at 45.  It is certainly possible that information
contained in the CPARS evaluations contributed to both an offeror’s baseline score
and the adjustment to that score.  In the court’s view, however, that does not equate
to impermissible double-counting.16  The baseline score was a framework to guide
further weighing of all of the past performance information relevant to that offeror.
Negative information might be considered twice, i.e., applied for more than one
purpose, but only to arrive at a single, final, and comprehensive rating for each
offeror’s past performance.  The court notes that considering certain pieces of
information relevant to an offeror’s proposal for multiple purposes during
evaluation proceedings is not impermissible.  E.g., EBA Eng’g, Inc., B-275818,
97-1 CPD ¶ 127, 1997 WL 150996, at *9 (Comp. Gen. Mar. 31, 1997).  The court
finds nothing irrational or impermissible in the past performance rating of
Palmetto’s proposal.

       16
          / The TEP showed an awareness of double-counting that might skew a past
performance evaluation, and eliminated at least one form of double-counting.  See AR at 5176
(“Because the NIH/CPARS and Award Fees reports had already been incorporated into the
baseline score, the TEP did not assign new findings (e.g. strengths or weaknesses) based on the
narrative in those reports.”).

              3.     Noridian’s Past Performance Ratings Were Not Irrational

       CGS complains that Noridian received a near-perfect past performance score
that was not justified in light of the information contained in the past performance
source documents before the agency. As a threshold matter, there are factual flaws
in plaintiff’s argument. Furthermore, the court cannot agree with CGS’s
proposition that the final past performance score for Noridian was irrational.

       First, CGS contends that Noridian received .87 out of a possible .875 for
past performance – a near-perfect score. CGS Mot. at 43; CGS Reply at 34. The
maximum score for past performance was .9, not .875. AR at 5171-72. It might be
said that Noridian received a near-perfect score for past performance, but CGS has
mischaracterized the distance between .87 and a perfect score.

       Second, CGS suggests that the .05 upward adjustment of Noridian’s baseline
past performance score (.82) must have been entirely based on past performance
other than that reviewed in the NIH, CPARS and Award Fee evaluations. The
record does not support this assertion. The TEP considered all past performance
information in making its adjustments to the baseline past performance scores, see
AR at 5176, and CGS mistakenly excludes the NIH, CPARS and Award Fee
evaluations as source documents considered for the adjustments made to the
baseline past performance scores.17

     Based on this misreading of the record, CGS argues that certain source
documents do not support a “near-perfect” rating for Noridian’s past performance.

       17/ CGS relies heavily on a statement in the SSM which, when summarizing the TEP’s
past performance evaluation methodology, neglects to mention every source document
considered by the TEP in its adjustments to baseline scores. See AR at 13858. The next
sentence in the SSM, however, directs the reader to “[a] complete description of the Past
Performance evaluation, and the methodology used, . . . contained within
Section II of the TEP Report”). Id.; see also supra note 10.

CGS Mot. at 44-45; CGS Reply at 35-36. The court has reviewed the TEP’s
explanation for Noridian’s baseline score of .82 and for the .05 adjustment to that
baseline score and finds nothing irrational in those scores, despite the less-than-
perfect ratings received by Noridian on some measures highlighted by CGS. See
AR at 5195-98. The court agrees with Noridian, Noridian Mot. at 32, that under
the deferential review this court accords past performance evaluations in negotiated
procurements, Noridian’s past performance scores were not erroneous. See, e.g.,
Overstreet Elec. Co. v. United States, 59 Fed. Cl. 99, 117 (2003) (noting that past
performance evaluations in this type of procurement are accorded a “triple
whammy of deference”).

      For all of the above reasons, the agency’s past performance evaluation of the
offerors’ proposals has not been shown to be arbitrary, capricious or contrary to
law.

       D.      Technical Understanding

       Plaintiffs challenge numerous aspects of the agency’s technical
understanding ratings for the offerors. Noridian received a perfect score (.9) for
this evaluation factor, so it is not surprising that plaintiffs attack the strengths
assigned to Noridian’s technical proposal in an effort to invalidate that score.
Plaintiffs also attack certain weaknesses assigned to their proposals, and further
argue that certain of their proposed approaches, which were not rated as strengths,
were comparable to Noridian’s proposed features which were rated to be strengths;
thus, plaintiffs argue that their proposals received unequal treatment. The court’s
discussion of plaintiffs’ arguments will be limited to plaintiffs’ principal
contentions.18

               1.      RapidApp

      The MAC for Jurisdiction E will enroll physicians and other medical
providers into the Medicare program. See AR at 369-71. For this task of provider

       18/ The court has considered all of plaintiffs’ arguments. Arguments not specifically
addressed in this opinion section were not at all persuasive. In many instances, the court was
asked to substitute its judgment for that of the agency as to the merits of highly technical
features of the offerors’ proposals. The court defers, however, to the agency’s expertise, as it
must. E.W. Bliss, 77 F.3d at 449 (stating that the court should not second-guess “technical
ratings and [other] discretionary determinations of procurement officials”) (citation omitted).

enrollment, Noridian proposed to automate the process using a system called
RapidApp. Id. at 2502-03. The agency relied upon its evaluation of RapidApp
both for the technical understanding evaluation factor and its cost realism analysis
of proposals. In this opinion section, the court focuses only on the agency’s
appraisal of the value of RapidApp as support for Noridian’s .9 technical
understanding rating.19

       In broad strokes, the evaluation of RapidApp occurred in three phases. First,
the TEP awarded Noridian a strength for RapidApp, and noted this strength in the
TEP report. AR at 5193 (“The TEP also assigned a strength to the Offeror’s
proposed innovation, RapidApp.”). Second, in a series of emails, members of the
TEP were alerted in August 2012 that RapidApp, which was being tested through a
pilot program, might not provide as much functionality as their earlier evaluation
had estimated, because of changes in the agency’s provider enrollment system
(Provider Enrollment, Chain Ownership System (PECOS)). E.g., id. at 13828-29.
The TEP report was not revised to delete the strength for RapidApp, however.
Third, the SSA considered the changes to PECOS and the reduced probable
functionality of RapidApp. She concluded that RapidApp continued to be a
strength for the technical understanding evaluation factor. Id. at 13894-95.

      The issue before the court is whether the continued appreciation for
RapidApp by the SSA, i.e., that RapidApp was an innovation constituting a
strength in Noridian’s technical proposal despite its reduced probable functionality,
was rational. The key passage in the SSM which illuminates this issue is
reproduced here:

                 RapidApp is a very progressive innovation with the
                 potential to achieve all that the TEP has indicated. It
                 automates [the] provider enrollment process by
                 eliminating many manual processes; and demonstrates a
                 strong understanding of what is required to improve the
                 provider enrollment process. However this application
                 has not been in place for a sufficient period of time to
                 demonstrate that it will achieve everything that is
                 proposed. In addition, subsequent to the completion of
                 the TEP Report, it was learned that CMS may disallow

       19/ The court must, regrettably, split its discussion of RapidApp into two parts.

             the front-end on-line “interview” feature of RapidApp,
             since CMS [in PECOS] now has on-line data entry
             screens available for provider applications. However, the
             “back-end” automation features of RapidApp may be
             retained and may serve to foster efficiencies in the
             administration of the Medicare . . . program. In the
             opinion of the SSA, despite this new information,
             RapidApp remains a strength within [Noridian’s]
             proposal. That notwithstanding, it is my conclusion that
             this new information does not change the overall
             conclusions reached by the TEP in relation to
             [Noridian’s] technical proposal because the use of this
             technology for the above “back-end[”] processes with its
             attendant efficiencies is of value to the agency.
             RapidApp is an innovation indicative of [Noridian]
             striving to make use of technological advances to
             promote efficiencies in the Medicare . . . program[.]

AR at 13894-95. As the court understands the SSA’s references to RapidApp’s
front-end and back-end features, the front-end features assist in the automation of
the provider’s submission of an application for entry into the Medicare program,
whereas the back-end features assist in the verification of data contained in the
application. See id. at 2502, 13894-95, 15920.

       Plaintiffs argue that RapidApp, at the time the SSA made her award
decision, no longer qualified as an innovation worthy of a strength. CGS, for
example, states that RapidApp does not meet the definition of an innovation that
merits a strength. CGS Mot. at 24. In support of this contention, CGS relies
heavily on the email correspondence between TEP members regarding the possible
cancellation of the pilot program for RapidApp. Id. at 24-25. Plaintiffs also rely
on an email from Noridian to CMS which noted certain incompatibilities between
RapidApp and CMS systems. See AR at 13829. CGS argues that in the face of
uncertainties over RapidApp’s functionality, the SSA impermissibly rewrote
Noridian’s proposal to offer only the back-end features of RapidApp, and chose to
irrationally ignore the definition of an innovation provided in the RFP to award a
strength to Noridian for RapidApp. CGS Mot. at 25 n.8, 26-27.




       The definition of an innovation that merits a strength is provided in the SSP,
not the RFP.20 The relevant passage states:

               Strengths may relate to demonstrated or proposed process
               improvements and innovations that either exist in the
               Offeror’s current operations or that the Offeror newly
               proposed for the contract. When strength is asserted for
               proposed innovations and/or process improvements, the
               Offeror must document that its proposed innovation or
               process improvement is realistic and the expected
               outcome is highly likely to occur. Simply meeting the
               Statement of Work requirements is not considered a
               strength.

AR at 2294. Using this definition, the SSA properly concluded that the back-end
automation of provider enrollment offered by Noridian in its RapidApp innovation
was a strength. She did not re-write Noridian’s proposal to do so.21 Although
plaintiffs may disagree with the SSA’s conclusion, the court does not find that the
record before the agency shows that the strength awarded to RapidApp was
arbitrary or capricious.

               2.     Integrated MSNs

       Noridian proposed to integrate certain mailings to a beneficiary (Medicare
Summary Notices (MSNs)) into one envelope, thus realizing certain efficiencies in
workload processing, AR at 5245 (“The Offeror has integrated Pt. A, Pt. B, and
[Durable Medical Equipment] Medicare Summary Notices for the same beneficiary
into one mailing envelope.”), and was awarded a strength for this “innovation,” id.




       20/ The section of the RFP cited by CGS merely gives instruction to the offerors as to the
definition of innovations, so that these features may properly be identified as innovations in the
offerors’ proposals. See AR at 2088. There is nothing in this section of the RFP which sets forth
the definition of an innovation that would be evaluated by CMS as a strength.
       21/ As Noridian argues, the SSA could rationally read Noridian’s description of
RapidApp as providing separate back-end and front-end features of independent value to the
agency. See Noridian’s Mot. at 20.

at 5245, 5193-94.22 However, as the agency has conceded, integrating MSNs into
one envelope was practiced by other MACs and the strength award to Noridian for
this aspect of its proposal was erroneous. Def.’s Mot. at 45-46.

       CGS asserted at oral argument that this error in Noridian’s technical
understanding evaluation was prejudicial to CGS. Tr. at 30-31; see also id. at 8
(asserting that “any error in this procurement is material” and that “if any error is
demonstrated [CGS] will have an opportunity to prevail in this competition”). The
court must disagree. The TEP did not cite integrated MSNs as one of the
significant strengths of Noridian’s proposal when summarizing their deliberations
as to the technical understanding evaluation factor. AR at 5195. Similarly, the
SSA did not specifically mention integrated MSNs when she listed the more
notable strengths identified in Noridian’s proposal. See id. at 13893, 13895,
13897, 13909-11. Indeed, integrated MSNs were described as being of “somewhat
less value than [Noridian’s] more notable strengths.” Id. at 13893. In that regard,
the inquiry into prejudice begins with this recognition on the part of the SSA that
Noridian’s integrated MSNs “innovation” was less valuable than other strengths.

       Although CGS insists that this “competition was close,” CGS Mot. at 1, the
gulf between CGS and Noridian was considerable, in both the past performance
and technical understanding factors, the two most heavily weighted non-cost
factors. In past performance, CGS received a score of .7, whereas Noridian
received a score of .87, and in technical understanding, CGS received a score of .6,
whereas Noridian received a score of .9. The gulf was similarly great between
CGS’s overall technical rating, .68, and Noridian’s final overall technical rating,
.85. This gulf could not have been bridged, in the court’s view, by the elimination
of one of the lesser-valued strengths awarded in error to Noridian’s proposal.

    As to the impact of the elimination of a strength for Noridian’s integrated
MSNs on the SSA’s trade-off analysis and best value award decision, the record is



       22/ Noridian argues that the strength awarded to Noridian was for integrated MSNs and
other workload processing improvements; therefore, Noridian argues, the integrated MSNs
feature of Noridian’s proposal was merely “one element of this strength.” Noridian Mot. at 37.
It is not clear from the record that CMS made such a distinction. In any case, whether the
integrated MSNs were considered an independent strength or an element of a workload
processing improvements strength matters little to the court’s analysis here.

clear that such a correction would not have changed the calculus in CGS’s favor.23
The narrative of the SSM shows a strong preference for the technical superiority of
Noridian’s proposal, and expresses that preference in emphatic terms. Much of the
preference expressed by the SSA depends on the value to CMS of the “more
notable” strengths in Noridian’s proposal. The strength related to integrated MSNs
is not of sufficient weight to tip the balance in favor of CGS if that strength were
eliminated.

       Finally, the court notes that the SSA provided an alternative trade-off
analysis, discussed supra, where she indicated that she would pay a large price
premium for the advantages of Noridian’s proposal. In light of such a statement in
the SSM, it is not rational, in the court’s view, to conclude that the evaluation of
proposals was “close,” or that the strength awarded in error to Noridian for
integrated MSNs was prejudicial to plaintiffs. Noridian’s proposal stood head and
shoulders above the others, and the deletion of one less notable strength would not
have changed the agency’s award decision. For all of the above reasons, the error
in the rating of Noridian’s integrated MSNs was not prejudicial to plaintiffs, and
this error does not justify the court’s intervention in this procurement. See, e.g.,
Bannum, 404 F.3d at 1357 (stating that “the prejudice determination assesses
whether an adjudged violation of law warrants setting aside a contract award”).

               3.      Audit and Reimbursement Weakness

       Both plaintiffs proposed the same subcontractor, First Coast Service
Options, Inc. (FCSO), for certain Audit and Reimbursement functions. FCSO’s
Audit and Reimbursement plan was deemed to constitute a technical understanding
weakness in the proposals submitted by CGS and Palmetto. Plaintiffs argue that
this weakness was undeserved, for a variety of reasons. The evaluation of FCSO’s
Audit and Reimbursement plan is alleged to have been irrational, unequal,
unreasonable, inexplicable, conclusory, unsupported, and opaque. CGS Mot. at
35-36; Palmetto Mot. at 35-40. The three most substantial arguments in this regard
are that: (1) the Subject Matter Expert (SME) wrongly criticized FCSO’s
assumptions in the Audit and Reimbursement plan; (2) the weakness in the
technical understanding factor is inconsistent with the agency’s cost realism



       23/ The rejection of Palmetto’s proposal, given its lower ratings and higher price, would
also still have occurred.

analysis; and, (3) Noridian’s Audit and Reimbursement assumptions were far
worse than FCSO’s, and thus should have been noted as a weakness, but were not.

       The court turns first to plaintiffs’ arguments which focus on the SME’s
judgment that FCSO made three inaccurate assumptions in its Audit and
Reimbursement plan. CGS expresses disagreement with the SME’s criticism of
FCSO’s plan. See CGS Mot. at 35 n.14 (suggesting that one of FCSO’s
assumptions was sufficiently accurate as an estimate); CGS Reply at 26-27
(arguing that FCSO’s assumptions reflect estimates that the agency has “accepted”
in the incumbent Jurisdiction 1 contract). Palmetto, for its part, suggests that the
SME’s criticism of FCSO’s Audit and Reimbursement plan was unsupported,
conclusory, and opaque. Palmetto Mot. at 36-38. Palmetto asserts, in particular,
that the SME’s criticism of FCSO’s assumptions “was unsupported by any
alternative workload assumptions.” Id. at 35.

       Defendant notes that the TEP, relying upon the SME’s advice, concluded
that “[t]he sum of these three [inaccurate FCSO] assumptions . . . evidenced a risk
that an important aspect of the contract – fraud prevention – might not be fully
accomplished.” Def.’s Mot. at 43. Noridian, Noridian Mot. at 42, points to the
TEP’s concern that there were risks in FCSO’s Audit and Reimbursement plan:

               The TEP agrees with the . . . SME that these assumptions
               are not realistic and pose the risk that Cost Reports will
               not be settled accurately, resulting in inappropriate
               payments to institutional providers. This demonstrates a
               weakness in its technical approach that may result in
               lower quality of work and poor stewardship of trust fund
               dollars.

AR at 5228. Both the government and Noridian argue that it was reasonable for
the SME to criticize FCSO’s assumptions in its Audit and Reimbursement plan, in
light of the explanatory text of the proposals CGS and Palmetto submitted to the
agency.24 Def.’s Mot. at 41-42; Noridian Mot. at 43.


       24/ Neither plaintiff provided record cites to the court which would have identified the
pages which contain the challenged assumptions in FCSO’s Audit and Reimbursement plan.
Plaintiffs’ contentions that these assumptions are accurate lack persuasive force, because, among
                                                                                         continue...

       Although plaintiffs would prefer that this court substitute its judgment for
that of the SME and the TEP, such a course of action is not proper. E.g., Ala.
Aircraft Indus., Inc.-Birmingham v. United States, 586 F.3d 1372, 1376 (Fed. Cir.
2009) (Alabama Aircraft) (citing Motor Vehicle Mfrs. Ass’n v. State Farm Mut.
Auto. Ins. Co., 463 U.S. 29, 43 (1983)). The contemporaneous record of this
procurement shows a rational basis for the concerns expressed by the SME and the
TEP. The court therefore defers to the agency’s expertise in this regard.

       CGS and Palmetto also argue that the weakness assigned to FCSO’s Audit
and Reimbursement plan (for having inaccurately underestimated certain tasks) is
inconsistent with the agency’s cost realism analysis. See CGS Mot. at 35; Palmetto
Mot. at 38-39. Plaintiffs note that FCSO’s proposed hours of work for Audit and
Reimbursement exceeded the SME’s estimate, and that the BEP made no
adjustment to the hours proposed by FCSO for this function. While there is a
superficial attractiveness to this argument, it rests on the premise that an adequate
estimate of total Audit and Reimbursement hours of labor necessarily validates
every assumption in FCSO’s Audit and Reimbursement plan as to expected tasks.
Defendant suggests that plaintiffs are comparing apples to oranges, because the
technical understanding weakness was not questioning the number of hours
proposed by FCSO, but the “level and depth of audits and payment dispute
resolution.” Def.’s Mot. at 42. Defendant’s point is well-taken. It was not
irrational for the agency to find a weakness in FCSO’s assumptions as to Audit and
Reimbursement tasks, while at the same time approving FCSO’s estimate of total
hours needed for Audit and Reimbursement labor.

       Finally, plaintiffs suggest that because Noridian’s assumptions as to Audit
and Reimbursement total hours were adjusted by the BEP, Noridian should also
have received a weakness under the technical understanding factor for its Audit
and Reimbursement plan. CGS Mot. at 35-36; Palmetto Mot. at 39-40. Plaintiffs
note that FCSO’s total hours projection exceeded the SME’s estimate, whereas
Noridian’s proposed total hours for Audit and Reimbursement fell short of that
estimate. Furthermore, Palmetto suggests that Noridian “made several other
unreasonable . . . assumptions regarding the Audit and Reimbursement workload.”



       24/ ...continue
other reasons, plaintiffs have not provided the necessary factual underpinnings for their
arguments.

Palmetto Mot. at 40. Plaintiffs thus contend that Noridian received unjustifiably
better treatment in the evaluation of its Audit and Reimbursement plan.

       As discussed supra, an adjustment to Noridian’s Audit and Reimbursement
proposed hours is not necessarily synonymous with a weakness in its technical
understanding of Audit and Reimbursement tasks. As defined in the SSP, a
weakness is “a flaw in the proposal that increases the risk of unsuccessful contract
performance.” AR at 2294. As defendant explains, CMS could rationally find a
risk in the inaccurate assumptions in FCSO’s Audit and Reimbursement plan, but
no risk in Noridian’s low estimates for certain labor hour categories in its Audit
and Reimbursement plan. See Def.’s Reply at 17 (“[A]lthough Noridian
underestimated the time needed to accomplish certain tasks, F[CS]O
underestimated the complexity and depth of the tasks involved.”).

       Plaintiffs’ burden is to show that the evaluation of Noridian’s Audit and
Reimbursement plan was arbitrary or irrational. The court finds that CMS
rationally assigned a weakness to FCSO’s Audit and Reimbursement plan, and
rationally did not assign a weakness to Noridian’s Audit and Reimbursement
plan.25 In other words, plaintiffs have failed to show that the TEP erroneously
evaluated Noridian’s Audit and Reimbursement plan as meeting the contract
requirements set forth in the RFP. For all of the above reasons, the court does not
find that the agency’s evaluation of the offerors’ Audit and Reimbursement plans
under the technical understanding factor was arbitrary or capricious.

              4.      A Dedicated CERT Coordinator, and Other Medical
                      Review Issues

       CGS attacks the agency’s technical understanding ratings on a multitude of
fronts, and groups some of its arguments related to the evaluation of personnel and
program strategies under this heading: “The CMS Medical Review Strategy and
CERT Reduction Plan Evaluation Was Arbitrary and Unequal.” CGS Mot. at 28.
The basic thrust of these arguments is that CGS proposed good strategies and
equivalent personnel solutions in these areas, yet its proposal received weaknesses


       25/ Palmetto notes a particular instance of an inaccurate assumption in Noridian’s Audit
and Reimbursement plan. Palmetto Mot. at 40 (citing AR at 11032). The record page cited does
not indicate, however, that a weakness in technical understanding should have been assigned to
Noridian for this single inaccurate assumption. See id.

where Noridian received strengths. Id. at 28-31. The court begins with an
overview of the proposal features (and terminology) at issue in these alleged
evaluation errors.

                   a.    Medical Review and CERT Defined

      Medical Review is described by the agency in the following manner:

            Medical review and local provider education and
            training. The . . . MAC will reduce the claims payment
            error rate by identifying, through analysis of data and
            evaluation of other information, program vulnerabilities
            concerning coverage and coding made by individual
            providers; and by taking the necessary action to prevent
            or address the identified vulnerabilities.

AR at 129. The RFP introduced its extensive discussion of Medical Review with
this statement: “The Contractor shall decrease the paid claims error rate and
address Medical Review (MR)–related coverage, coding, and billing errors . . . .”
Id. at 425. The MAC for Jurisdiction E was required to “develop a problem-
focused, outcome-based [Medical Review] strategy.” Id. Thus, although Medical
Review has many aspects, the primary goal appears to be reducing the Medicare
“claims payment error rate.” Id. at 129.

       A related term is “Comprehensive Error Rate Testing (CERT).” AR at 428;
see also id. at 485 (alternatively described as Compliance Error Rate Testing).
This is just one source of data that is used in a Medical Review strategy. Id. at
426. The RFP contains a description of CERT:

            CMS has developed the CERT program to produce
            national, Contractor-specific, and service-specific paid
            claim error rates. The program has independent
            reviewers who identify random samples of Medicare
            claims when they enter the claims processing system.
            The independent reviewers review the selected
            claims after they are paid or denied to ensure that the
            decision was appropriate. The outcome of the review is a
            provider compliance error rate and paid claims

              error rate. The CERT Contractors are responsible for
              operating the CERT operations center and for gathering
              information from the [MAC] Contractor.

Id. at 428.

       Reduction of CERT rates is a priority for CMS, and one closely related
priority is the reduction of fraud and abuse in Medicare claims, which can
contribute to high CERT rates. AR at 304-05, 427; Def.’s Mot. at 43. Jurisdiction
E contains California, a state identified in the RFP as an area of high Medicare
fraud and abuse. AR at 457. Thus, the TEP closely scrutinized the offerors’
proposals as to their Medical Review strategies and CERT reduction plans.

                     b.   TEP Ratings for Medical Review and CERT
                          Reduction

     Two weaknesses were assigned to CGS in this area. The first criticized
CGS’s CERT reduction plan:

              [ ].

AR at 5201. Although the parties dispute the robustness of CGS’s CERT
reduction plan, after consideration of these arguments and the text of CGS’s
proposal, the court finds nothing irrational in the weakness assigned by the TEP to
CGS’s CERT reduction plan. In the absence of any error of fact in the TEP report,
the court must defer to the technical expertise of the agency in this evaluation
finding.

       CGS also received a weakness for its Medical Review strategy. The TEP
report concluded that:

              [ ].

AR at 5201. This weakness contributed to CGS’s “numeric score of .6 for the
Technical Understanding factor.” Id. at 5202.

       CGS argues that its Medical Review strategy contained a large number of
specific approaches that controvert the TEP’s conclusion that CGS’s Medical

Review strategy “was very general in nature.” AR at 5201. There are indeed
eighteen pages of Medical Review strategy in the CGS proposal, AR at 3444-61,
but the court must agree with defendant that CGS’s challenge to the weakness it
received for its Medical Review strategy “amount[s] to no more than a
disagreement with the agency’s exercise of its expertise and discretion in
evaluating the [offerors’] respective approaches to reducing Medicare
fraud” and other claims payment errors, Def.’s Mot. at 43. Both the TEP and the
SSA found a weakness in CGS’s Medical Review strategy, AR at 5201-02, 13899,
and provided a coherent and rational explanation for this finding; the assignment of
this weakness has not been shown to be irrational.

       Furthermore, the court sees nothing arbitrary or capricious in the agency’s
largely negative review of the Medical Review strategy in CGS’s proposal and the
largely positive review of the Medical Review strategy in Noridian’s proposal.
These are highly technical evaluation rating decisions to which the court must
defer. Having found no error of fact or reasoning in the TEP report or the SSM,
the court finds that CGS has not met its burden to show that the agency’s
evaluation of its Medical Review strategy and its CERT reduction plan was
irrational.

                   c.    Allegedly Disparate Treatment of a Dedicated CERT
                         Coordinator in the Proposals Submitted by CGS and
                         Noridian

       CGS relies heavily on the fact that its proposal contained a reference to a
dedicated CERT Coordinator, whereas only Noridian received a strength for
offering a dedicated CERT Coordinator. CGS Mot. at 29-30. It is certainly true
that both proposals identify personnel with the title CERT Coordinator. Compare
AR at 2369 with id. at 3460-61. The parties engage in a cumbersome analysis of
the Medical Review strategies proposed by CGS and Noridian in an attempt to
identify how many positions in each proposal were devoted to CERT Coordinator-
related activities. There are only two valuable insights to be gained from this
debate: (1) there are some differences in the constellation of staff positions
proposed by each offeror for Medical Review functions; and (2) the roles of these
staff persons are variously described.

      The TEP accorded Noridian a strength for its dedicated CERT Coordinator:


            The TEP assigned a separate strength to the Offeror’s
            proposal to employ a dedicated CERT Coordinator to
            perform [ ]. This will allow [Noridian] to better analyze
            claim aberrancies and identify payment vulnerabilities.
            In addition to the coordinator position, the Offeror
            proposes [ ] above what CMS requires and demonstrates
            the Offeror respects the importance attached to being an
            accountable steward of Medicare trust fund dollars.

AR at 5192-93. The TEP also noted a similar strength in Noridian’s Medical
Review strategy, for proposing “a Medical Review Manager fully dedicated to the
JE contract, again demonstrating the Offeror’s awareness of Agency priorities, i.e.
reducing the CERT error rate in Jurisdiction E and addressing the high incidence of
fraud and abuse in this jurisdiction through the MR program.” Id. at 5192.

        The court finds nothing irrational or unequal in the award of these related
strengths to Noridian’s proposal. The TEP clearly reasoned that the personnel, and
roles for those personnel, in Noridian’s proposal exceeded contract requirements
for Medical Review. Although CGS insists that its proposal contained
substantially similar personnel features that should have been awarded a strength,
defendant persuasively argues that “CGS and Noridian . . . proposed different
approaches to these personnel, which supports the contracting officer’s decision to
treat them differently.” Def.’s Mot. at 45. The TEP report contains a rational
explanation of the strengths accorded Noridian in this regard, and justifiably
omits mention of features in CGS’s proposal that simply met contract
requirements.

       The court concludes that CGS has not shown that its proposal deserved a
strength for the dedicated CERT Coordinator that CGS proposed. Having found
no error in the TEP’s analysis of Medical Review strategies, CERT reduction
plans, or CERT Coordinator functions, the court will not disturb the technical
ratings Noridian and CGS received in these areas. The SSA’s reliance on these
ratings for her best value award decision, AR at 13899, 13909-12, was therefore
unobjectionable.

            5.     [ ] versus EXACT



        CGS asserts that it offered several “technologies for . . . managing
workflow” that were similar to those offered by Noridian, yet the TEP awarded
strengths only to Noridian and criticized CGS for its lack of “‘technical solutions
to manage the workload.’” CGS Mot. at 32 (quoting AR at 5202). As the primary
underpinning for its argument, CGS suggests that its [ ] was similar to Noridian’s
EXACT product.26 CGS also asserts that it offered a number of other technologies,
such as [ ], that should have earned CGS a better technical understanding rating.
Id. at 33-34; CGS Reply at 22-24. In CGS’s view, the disparate ratings of
technologies reflect disparate treatment.

       Defendant and Noridian respond that there was nothing irrational in the
TEP’s ratings of the technologies of the two offerors. Def.’s Mot. at 47-48;
Noridian Mot. at 40-41. Upon review of the record and the arguments of the
parties, the court finds that [ ] and EXACT are sufficiently different to justify the
distinctions drawn by the TEP in the technical understanding findings and ratings
for CGS and Noridian. Compare AR at 2500-01 with id. at 3432-33. Furthermore,
the agency’s expertise as to the relative value of these technologies is entitled to
deference. See, e.g., Omega World Travel, 54 Fed. Cl. at 578 (“It is well settled
that contracting officers are given broad discretion with respect to evaluation of
technical proposals.” (citing E.W. Bliss, 77 F.3d at 449)). As to the other
technologies offered by CGS, the record does not show that these products are of
such similar utility to those offered by Noridian that the TEP ratings for CGS’s
proposed workload processing technologies were infirm.27 The court finds no error
in the agency’s findings regarding the technologies proposed by CGS and
Noridian.28

       26/ EXACT does not appear to be an acronym.
       27/ To cite one example, CGS complains that Noridian should not have received a
strength for the automation of claims processing if CGS did not receive a strength for [ ]. CGS
Mot. at 33-34. But the strength awarded to Noridian was for EXACT’s ability “to automate the
claims suspense process,” whereas the description of [ ] in CGS’s proposal does not mention this
particular feature. Compare AR at 3435 with id. at 5193; see also id. at 2509-10 (offering a
detailed description of the proposed innovation of “Claim Suspense Process Automation”). The
court sees no error in the strength awarded to Noridian for its proposed innovation in claims
processing.
       28/ Although CGS complains that not all of its technologies were discussed by the TEP,
CGS Mot. at 33 n.13, 34, there was no requirement that the TEP comment on every feature
                                                                                  (continued...)

              6.     Business Rules Engines Proposed By Noridian and CGS

      In a similar vein, CGS argues that its “Business Rules Engine” was
equivalent to the one proposed by Noridian, and that its business rules engine
should have received a strength in the TEP’s technical understanding evaluation.
CGS Mot. at 32-33; CGS Reply at 23. The parties’ arguments in this regard note
that Noridian’s business rules engine was integrated and functioning as part of
EXACT, whereas CGS’s business rules engine had not yet been implemented. See
AR at 3439. Having considered these arguments, and the proposals that were
before the agency, the court finds nothing irrational in the
strength awarded to Noridian for the functionality of EXACT and its integrated
business rules engine. See id. at 5193.

              7.     Technical Understanding Ratings Summary

       CMS has conceded one error in its rating of Noridian’s technical
understanding, that of the strength awarded to Noridian for integrating Medicare
Summary Notices (MSNs). That error, however, was not prejudicial to plaintiffs.
The other challenges to the agency’s technical understanding evaluation, as
discussed above, have not identified any flaws in that evaluation, under the
deferential standard of review applicable to an agency’s evaluation of highly
technical proposals. The court therefore rejects this category of protest grounds as
justifying any of the relief requested by plaintiffs.

       E.     Cost Adjustments

      The agency was required by the FAR to adjust, where necessary, the
proposed costs of the offerors’ proposals to arrive at the probable costs of the
proposals. See FAR 15.404-1(d)(2). All of the offerors’ proposed costs were
adjusted upward. Not surprisingly, plaintiffs argue that their probable costs were
unjustifiably inflated, whereas Noridian’s costs were not raised enough to reflect
Noridian’s probable costs. The court will examine plaintiffs’ principal arguments




       28/ (...continued)
mentioned in an offeror’s proposal. See supra.

in this regard,29 but notes at the outset that the adjustment of an offeror’s costs to
reflect probable costs is a highly technical endeavor. As long as there is a rational
explanation in the contemporaneous record for the cost adjustments performed by
CMS, the court will not second-guess the agency’s highly technical cost
adjustment decisions. See, e.g., Electro-Methods, 7 Cl. Ct. at 762 (noting the need
for judicial restraint when this court reviews highly technical procurement
decisions).

               1.     Provider Enrollment Productivity

       The MAC in Jurisdiction E will be responsible for enrolling providers into
the Medicare program. AR at 369-71. One of many steps in this process is the
entry of information into PECOS (Provider Enrollment, Chain and Ownership
System),
which is a CMS system. Id. at 369. The cost of the provider enrollment function
varies based on the number of provider enrollment applications the MAC can
process per day, a rate which the parties refer to as the productivity rate.30 An
adjustment lowering an offeror’s proposed productivity rate, which the BEP
determined was necessary for each of the offerors in this procurement, results in an
increase in the offeror’s probable costs (more hours of labor are required, for
example) for the contract. As stated supra, plaintiffs argue that the adjustments to
the offerors’ proposed productivity rates were not reasonable.

        As a threshold matter, the court surveys the information relied upon by the
agency to make cost adjustments to the provider enrollment productivity rates.
First, the offerors provided their proposed productivity rates. For Noridian, this
rate was [ ] applications per day. For FCSO, the subcontractor for both CGS and
Palmetto, this rate was [ ] applications per day. Second, the agency had Noridian’s
self-reported historical productivity rate of [ ] applications per day. FCSO did not
self-report its historical productivity rate, but stated that its proposed productivity


       29/ CGS devoted nine pages of its opening brief and eleven pages of its reply brief to
three distinct cost adjustment challenges. Palmetto was more restrained in the number of its cost
adjustment challenges, but it, too, devoted a substantial number of pages to its cost adjustment
arguments. The court has considered all of these arguments (and their sub-parts), and finds them
to be unavailing. This opinion section attempts to address each substantive challenge to the
agency’s cost realism analysis.
       30/ Only the productivity rate for Medicare Part B provider enrollment is at issue in this
bid protest.

rate was “based upon actual MAC J1 experience adjusted for enhancements to the
provider enrollment process as outlined in the RFP and FCSO initiated process
improvements.” AR at 4410. Third, the agency’s experts (the SME and the TCA)
provided an average provider enrollment productivity rate of 12 applications per
day, derived from national data but weighted to reflect the workload in Jurisdiction
E. Id. at 11303. Fourth, the agency possessed information that efficiency gains
attributed to RapidApp, the technological innovation proposed by Noridian to
improve its historical productivity rate, were diminished because of changes to
PECOS.31

       Although the parties strongly dispute the methodology utilized by the BEP
to arrive at probable productivity rates for the offerors, the record is clear that
Noridian and FCSO were both determined to have a probable provider enrollment
productivity rate of 12 applications per day. The court will consider first whether
the agency committed the error alleged by plaintiffs, i.e., that CMS substituted a
“should have bid” productivity rate for an adequately supported FCSO-specific
productivity rate, an error that CGS refers to as the improper normalization of
proposed costs. See CGS Mot. at 17 (citing Univ. Research Co. v. United States,
65 Fed. Cl. 500, 509, 512 (2005)). The court will then consider whether the 12
applications per day probable productivity rate assigned to Noridian was irrational,
because of improper normalization or other flaws in the agency’s cost realism
analysis.

                      a.      FCSO’s Provider Enrollment Productivity

       Plaintiffs argue that FCSO, a national leader among MACs in provider
enrollment, adequately substantiated its proposed productivity rate of [ ]
applications per day for Jurisdiction E. Plaintiffs note further that productivity
rates vary widely between MACs. See AR at 11302, 11861, 15871. Thus,
plaintiffs argue, it was improper and irrational to substitute an average productivity
estimate for FCSO’s proposed productivity rate.

       Plaintiffs also point to the enhancements FCSO proposed to improve its
historical productivity rate. One of these enhancements is its [ ], instituted in 2008.


       31/ RapidApp is discussed at some length in the section of this opinion devoted to
plaintiffs’ challenges to the agency’s evaluation of the offerors’ technical understanding. See
supra.

AR at 3434, 4401. [ ] was reported to have cut hours in provider enrollment in
Jurisdiction 1, and further efficiencies involving [ ] were predicted. Id. [ ]
received positive commentary in an email composed by the SME. See id. at 12389
(“[ ].”).

       Defendant and Noridian argue, however, that CMS properly relied upon the
agency’s 12 applications per day estimate in determining FCSO’s probable
productivity rate. First, they point out that FCSO did not disclose its historical
productivity rate for Jurisdiction 1, despite the warning in the RFP that CMS
would not necessarily retrieve information that was not included in the offerors’
proposals. Noridian Mot. at 7-8; Def.’s Mot. at 51 (citing AR at 278). Second,
they argue, relying on the contemporaneous record of the BEP’s cost realism
analysis, that CMS did not possess historical productivity data for FCSO. See AR
at 11303. Third, Noridian argues that the characterization of FCSO as a national
leader in provider enrollment is not synonymous with an above-average
productivity rate. Fourth, Noridian argues that [ ], in place since 2008, is already
represented in FCSO’s historical productivity rate and thus should not be construed
as support for a higher-than-historical proposed productivity rate. Noridian Reply
at 6. For all of these reasons, defendant and Noridian urge the court to find that the
agency’s decision to lower FCSO’s productivity rate to the weighted average of 12
applications per day was reasonable.

       The court concludes that CMS rationally considered the information
available to it and determined that FCSO’s proposed productivity rate was too
high. Although the efficiency of [ ] was warmly praised by the SME, he also noted
[ ]. AR at 12389. The TEP, which assisted the BEP in the cost realism analysis,
reported that [ ] would not raise FCSO’s productivity rate. The TEP’s commentary
in this regard is conclusory and does not capture every nuance of the SME’s email,
but it does not appear to the court that the TEP expressed an irrational conclusion
as to FCSO’s probable productivity rate:

             In the TEP’s efforts to determine whether the proposed
             productivity rates are reasonable, it consulted with the PE
             SME. The subcontractor, First Coast Service Options,
             proposed an innovation in provider enrollment called [ ],
             however, the SME did not consider it a tool that would
             increase productivity but rather a tool to [ ] throughout
              the process. As a result, the TEP concluded there were
                no innovations or other technical proposals to support the
                Offeror’s proposed productivity and therefore
                recommends an adjustment that used the most recent
                baseline numbers provided by the SME and calculating a
                weighted average of those baseline numbers with which
                to compare the proposed productivity.

Id. at 11303.

       To the extent that the TEP failed to fully quote the SME’s email, the court
finds this error to be de minimis. The SME did not opine that [ ] would raise
FCSO’s historical productivity rates for Jurisdiction 1. The BEP reasonably
questioned the [ ] applications per day productivity rate proposed by FCSO, and
reasonably determined that 12 applications per day was FCSO’s probable
productivity rate. The cost adjustments based on FCSO’s probable productivity
rate for provider enrollment were rational, according to the record that was before
the agency and that is now before this court.

                      b.    Noridian’s Provider Enrollment Productivity

        Noridian’s proposal stated that, largely due to RapidApp, it would improve
its historical productivity rate from approximately [ ] applications per day to a
proposed productivity rate of [ ] applications per day. AR at 2502. The BEP
report acknowledged that Noridian proposed to process [ ] applications per day; the
agency then proceeded to determine whether that figure was reasonable. Id. at
11022. The BEP report noted “the TEP’s uncertainty regarding when RapidApp
may be deployed in JE and to what extent.” Id. The TEP decided that it could not
accept Noridian’s proposed [ ] applications per day productivity rate. Id. After
considering available adjustment options, the weighted average of 12 applications
per day was adopted as “the preferred method.” Id.

       The parties review contemporaneous emails regarding RapidApp, which the
court has discussed briefly earlier in this opinion. The dominant opinion at CMS,
as expressed by the SSA, was that portions of RapidApp would continue to have
some utility. Although it is evident that the SME could not precisely quantify the
potential productivity gains attributable to RapidApp in Jurisdiction E, see AR at
11022 (stating that “the SME is unable to estimate a level of productivity savings
[from RapidApp] going forward”), it was not irrational for CMS to reduce
Noridian’s proposed productivity rate, citing efficiencies largely attributed to
RapidApp, from [ ] applications per day to 12 applications per day.32 Although the
court (and certainly plaintiffs) might have calculated a different productivity rate
for Noridian, plaintiffs have not shown that a productivity estimate of 12
applications per day was an irrational basis for the cost adjustment made to
Noridian’s proposal for provider enrollment.33 Plaintiffs’ challenge to the cost
adjustments made to the offerors’ proposals for provider enrollment must therefore
be rejected. See, e.g., Honeywell, 870 F.2d at 648 (“‘If the court finds a reasonable
basis for the agency’s action, the court should stay its hand even though it might,
as an original proposition, have reached a different conclusion as to the proper
administration and application of the procurement regulations.’” (quoting M.
Steinthal, 455 F.2d at 1301)).

               2.     Noridian’s Audit and Reimbursement Cost Adjustment

       CGS argues that Noridian’s Audit and Reimbursement costs, although
adjusted upward, were not adjusted enough. CGS Mot. at 21-22. CGS relies in
particular on an email from the SME which suggested that she used approximately
150,000 total Audit and Reimbursement hours to evaluate proposals. Noridian
proposed approximately [ ] hours for Audit and Reimbursement. AR at 2743.
CMS adjusted those hours to approximately [ ] hours. Id. at 11032. CGS
complains that Noridian’s Audit and Reimbursement costs were not “properly
increased.” CGS Mot. at 22. In its reply brief, CGS argues that the BEP’s total


       32/ The SSA described this reduction in Noridian’s productivity rate as a reduction of
50% in the “proposed productivity increase.” AR at 13895 (“[W]hen assessing the cost realism
of [Noridian’s] proposed productivity increases due to RapidApp, the TEP only accepted 50% of
the proposed productivity increase.”). This, in the court’s view, is not a mathematically accurate
statement. Noridian’s proposed productivity increase was an additional [ ] applications per day
(from [ ] applications per day to [ ] applications per day). Id. at 2502. The probable productivity
increase determined by the BEP was an additional [ ] applications per day (from [ ] applications
per day to 12 applications per day). Thus, the BEP accorded Noridian a productivity increase of
approximately [ ]% of its proposed productivity increase, not 50%.
       33/ The court notes that the SME described provider enrollment as “a constantly changing
area.” AR at 12389. The court further notes that changes in PECOS were increasing the
automation of provider enrollment. See id. at 13828-29. This evidence of instability and
uncertainty in provider enrollment processes, in the court’s view, supports the agency’s reliance
on an estimate based on a weighted average productivity rate produced by the SME and the
TCA.

adjustment to Noridian’s Audit and Reimbursement costs was “unexplained.”
CGS Reply at 12.

       Defendant and Noridian contend that the individual cost adjustments to
particular aspects of Noridian’s Audit and Reimbursement proposal were
reasonable. Defendant argues that the record provides a rational explanation for
the adjustments made to Noridian’s Audit and Reimbursement costs, and that the
SME’s 150,000 estimate for total labor hours should be viewed as a “presumptive
baseline,” rather than as a minimum acceptable threshold of labor hours for Audit
and Reimbursement activities. Def.’s Mot. at 52; see also Def.’s Reply at 23-25.
The court must agree with defendant, for the following reasons.

       First, both CGS and defendant note that there were some disagreements
between the Audit and Reimbursement cost adjustments proposed by the SME and
those proposed by the TCA. See, e.g., AR at 11033. Second, the Source Selection
Plan did not require that the BEP adopt all SME recommendations. See id. at
2280-81. Third, the BEP considered the input of the SME but nonetheless adopted
specific cost adjustments and a total cost adjustment recommended by the TCA.
Upon review of the BEP report regarding Noridian’s proposal, and after full
consideration of the significance of the 150,000 total labor hours estimate
produced by the SME, the court finds nothing irrational in the adjustments to
Noridian’s Audit and Reimbursement costs. In other words, the individual and
total cost adjustments endorsed by the BEP were not rendered irrational by the
higher total hours estimate provided by the SME. Although CGS complains that
the adjustments for Noridian’s Audit and Reimbursement costs are not adequately
explained, the record is sufficiently clear to show the rationality of the agency’s
decision in this regard. See, e.g., Bowman, 419 U.S. at 286 (stating that a court
should “uphold a decision of less than ideal clarity if the agency’s path may
reasonably be discerned”) (citation omitted).

            3.     CGS’s Overhead and G&A Cost Adjustments

      CGS’s final argument regarding the agency’s cost realism analysis is that
CMS erred when it adjusted CGS’s Overhead and General & Administrative
(G&A) rates upward. CGS Mot. at 22-23; CGS Reply at 12-13. CGS
acknowledges that its proposed rates were lower than some of its historical rates,
but suggests that a larger business base would justify these proposed rates if CGS
were awarded the Jurisdiction E MAC contract. CGS also contends that it
calculated its proposed rates by using the formula provided in the RFP. CGS
therefore concludes that the cost adjustments for its Overhead and G&A were
irrational.

       Noridian argues that the agency employed “reasonable skepticism” in its
review of CGS’s proposed rates, and that CGS has not shown that this skepticism
was irrational. Noridian Mot. at 45-46. Defendant notes that CMS was cognizant
of the business base increase inherent in CGS’s proposal, but nonetheless fully
explained why CGS would likely bill at higher rates than those proposed. Def.’s
Mot. at 53-54. The question before the court is whether there is a rational
explanation in the record for the adjustments to CGS’s Overhead and G&A rates.

       The court has reviewed the record pages cited by the parties that evidence
the basis for the upward adjustment of CGS’s Overhead and G&A rates. See AR at
11264-66, 11273-77. The BEP noted that CGS expected that a post-award
expanded business base would decrease its Overhead and G&A rates. Id. at 11264,
11274, 11276. The BEP also noted, however, that CGS had recently billed at [ ]
rates for Overhead and G&A. Id. at 11265, 11276. The BEP rationally concluded
that a cost adjustment to CGS’s Overhead and G&A rates was warranted.

       Although CGS disagrees with the agency’s factual analysis of the billing
data highlighted in the BEP report, see CGS Reply at 13 (suggesting that CGS’s
billing history shows decreasing rates), CGS has not shown that the agency’s
reasoning was infirm. As for the agency’s decision to adjust CGS’s rates upward,
the court defers to the agency’s technical expertise in this regard. The court
therefore sees no reversible error in the agency’s cost adjustments to CGS’s
Overhead and G&A rates.

             4.    The SSA’s Alternative Trade-Off Analysis

        As a final note regarding the agency’s cost realism analysis, the parties have
debated the value of the SSA’s alternative trade-off analysis. In the alternative
trade-off analysis, the SSA stated that she would have been willing to pay a
significantly higher price premium to obtain the advantages of Noridian’s proposal,
i.e., the price premium represented by the difference between Noridian’s adjusted
costs and CGS’s unadjusted costs. AR at 13911. CGS relies on a GAO decision
for the proposition that such alternative analyses fail as a matter of law because
they are “‘unsupported by specific analysis.’” CGS Reply at 10 n.5 (quoting
Boeing Co., B-311344, 2008 CPD ¶ 114, 2008 WL 2514171, at *46 n.88 (Comp.
Gen. June 18, 2008)). Noridian correctly notes that the Boeing decision is
distinguishable on its facts, and cites another GAO decision which did not reject an
alternative trade-off analysis as inconsequential. See Noridian Reply at 8 (citing
Noridian Admin. Servs., LLC, B-401068.13, 2013 CPD ¶ 52, 2013 WL 427848, at
*4 n.9 (Comp. Gen. Jan. 16, 2013)). Although the court does not rest its decision
in this bid protest on the SSA’s alternative trade-off analysis, that alternative
analysis clearly lends support to the court’s conclusion that no prejudicial error
occurred in this procurement.

                                 CONCLUSION

       The court has considered all of plaintiffs’ challenges to the SSA’s award
decision, and finds that no significant errors marred this procurement, except for
one technical evaluation error which was not prejudicial to plaintiffs. This court
will not substitute its judgment for the technical expertise of the agency in the
absence of significant, prejudicial error. See, e.g., Alabama Aircraft, 586 F.3d at
1376. The court concludes that the agency’s best value award decision has not
been shown to be arbitrary, capricious, an abuse of discretion, or contrary to law,
and that the award of the contract to Noridian must stand. Because plaintiffs have
not succeeded on the merits of their protest, the court need not consider whether
the standard for injunctive relief has been met in this case.

      Accordingly, it is hereby ORDERED that

      (1)   Plaintiffs’ Motions for Judgment on the Administrative Record, filed
            February 22, 2013, are DENIED;

      (2)   Defendant’s and Intervenor-Defendant’s Cross-Motions for Judgment
            on the Administrative Record, filed March 11, 2013, are GRANTED;

      (3)   The Clerk’s Office is directed to ENTER final judgment in favor of
            defendant and intervenor-defendant, dismissing the complaint with
            prejudice;

      (4)   On or before May 3, 2013, counsel for the parties shall CONFER and
            FILE with the Clerk’s Office a redacted copy of this opinion, with
             any material deemed proprietary marked out and enclosed in brackets,
      so that a copy of the opinion can then be prepared and made available
      in the public record of this matter; and

(5)   Each party shall bear its own costs.


                                       /s/Lynn J. Bush
                                       LYNN J. BUSH
                                       Judge




