Table of Contents
- The False Claims Act, in plain English
- Why cybersecurity suddenly fits the FCA
- What recent cybersecurity FCA settlements have in common
- A quick tour of headline examples (and what they signal)
- Aerojet Rocketdyne: cybersecurity representations can drive FCA exposure
- Verizon (MTIPS/TIC controls): cooperation credit is real, but so is the bill
- Penn State: universities and research ecosystems are not immune
- HNFS/Centene: ignored warnings and delayed remediation are recurring themes
- MORSECORP: scores, SSPs, and third-party services can become Exhibit A
- Georgia Tech Research Corporation: “cyber-defense research” still needs basic defenses
- Illumina: product security and quality systems enter the FCA chat
- The three buckets of cyber-FCA theories
- The new compliance math: CMMC and the rise of the “affirmation economy”
- Practical risk-reduction moves that actually work
- 1) Build a contract-to-control map (and keep it updated)
- 2) Treat assessments and scores like sworn statements
- 3) Operationalize vulnerability management
- 4) Make third-party oversight boringly rigorous
- 5) Align legal, security, procurement, and finance
- 6) Create a culture where bad news travels fast
- Common myths that get organizations in trouble
- Conclusion: cybersecurity promises are now payment promises
- Experiences from the field: what this shift feels like inside organizations
There was a time when “False Claims Act (FCA)” sounded like a niche legal thunderstorm reserved for Medicare billing,
defense pricing, and contractors who thought “creative accounting” was a personality trait. Then cybersecurity showed up,
kicked the door open, and said: “Hi. I’m also a contract term. Pay attention.”
Today, the U.S. government is increasingly treating cyber requirements like any other core promise in a federal deal:
if you certify you meet them and you don’t, that can become an FCA problem. Not because the government suddenly expects
perfection, but because it expects honesty about what it’s paying for. And the receipts aren’t just invoices anymore.
They’re policies, assessments, system security plans, vulnerability scans, incident reports, and the occasional “Yes, we
totally implemented that control” checkbox that someone clicked with the confidence of a golden retriever chasing a laser pointer.
Let’s unpack what’s driving this shift, what recent cybersecurity-related FCA settlements reveal, and how organizations can
reduce risk without turning their security program into a joyless museum exhibit labeled “DO NOT TOUCH.”
The False Claims Act, in plain English
At its core, the FCA is the government’s main civil tool for going after fraud involving federal funds. If an organization
knowingly submits (or causes the submission of) false or fraudulent claims for payment, or makes false statements material
to a claim, the FCA can come into play. The “claim” can be a literal invoice, but it can also be tied to certifications and
representations that the government relies on when it pays, or even when it awards the contract in the first place.
What counts as “false” in FCA land?
“False” isn’t limited to made-up numbers. It can include half-truths and misleading certifications, especially where
compliance with a requirement is material to the government’s decision to pay. In cybersecurity, that often means:
(1) contract clauses that require specific controls or standards, (2) a certification or assessment score that says you’ve met them,
and (3) evidence (sometimes internal evidence) that you didn’t.
The “knowing” part matters
The FCA isn’t supposed to punish honest mistakes. The law focuses on “knowing” conduct: think actual knowledge, deliberate
ignorance, or reckless disregard. In cybersecurity, risk tends to spike when leaders have repeated warnings (internal audits,
third-party reports, failed scans, ignored tickets, “temporary” exceptions that celebrate their fifth birthday) and still
certify compliance as if everything’s fine.
Whistleblowers are a feature, not a bug
One reason cybersecurity fits the FCA so well is the FCA’s whistleblower (qui tam) provisions. Insiders with technical
knowledge can bring claims on the government’s behalf and potentially share in recoveries. Cyber programs generate lots of
“paper” (and logs, and tickets, and audit trails) which can become evidence when a dispute is really about whether the story
told to the government matched reality.
Why cybersecurity suddenly fits the FCA
Two big things changed:
- The government buys digital services and data-heavy capabilities at massive scale. Security isn’t a nice-to-have; it’s part of what’s being purchased.
- Cyber requirements became explicit, measurable contract terms. Standards like NIST SP 800-171 and programs like CMMC (for defense contractors) turned “good security” into concrete obligations.
Once cybersecurity becomes a contractual requirement, misrepresenting compliance starts to look less like “IT drama” and
more like “fraud risk.” In other words: if you promised seatbelts and delivered vibes, the government may argue it didn’t get
what it paid for.
The Civil Cyber-Fraud Initiative: FCA meets MFA
The Department of Justice (DOJ) has made its intent clear: it will use the FCA to pursue entities that knowingly provide
deficient cybersecurity products or services, knowingly misrepresent cybersecurity practices, or knowingly violate obligations
to monitor and report cyber incidents. That framing matters because it expands the conversation beyond “you lied on an invoice”
into “you lied about security controls, product security, or reporting duties.”
What recent cybersecurity FCA settlements have in common
Recent settlements read like a greatest-hits album of avoidable pain. Not “a nation-state did something magical,” but
“we said we did the thing, and we didn’t do the thing.” Here are patterns that show up again and again.
1) Certifications and scores become legal promises
In the defense ecosystem, contractors may be required to implement NIST SP 800-171 controls and submit assessment scores.
Problems begin when scores (or timelines to fix gaps) are inflated, outdated, or not grounded in real systems.
For example, DOJ’s allegations in multiple matters have centered on the idea that the government relied on reported compliance
status, scores, and security planning as part of the bargain, especially where submission of an assessment score was connected
to contract eligibility or payment.
2) “We’ll fix it later” is not a compliance strategy
Plans of action and milestones (POA&Ms) can be legitimate tools when they’re real plans with owners, budgets, and deadlines.
But when they become a parking lot for permanent gaps, they can be framed as evidence of knowing noncompliance. The vibe of
“we’ll get to it after Q4” doesn’t age well when the contract period covers multiple Q4s.
3) Third-party services don’t outsource accountability
A common trap: relying on a third party (cloud, email hosting, managed services) without ensuring the provider meets required
baselines or reporting obligations. If your contract requires certain standards, the government may expect you to manage that
risk, not just point at a vendor and shrug like a sitcom character.
4) Vulnerability scanning and remediation are not “optional nice-to-haves”
Some allegations focus on failure to timely scan for known vulnerabilities, failure to remediate security flaws, and ignoring
auditor or internal warnings. In the FCA context, the risk isn’t merely that vulnerabilities existed; it’s that the organization
represented compliance while failing to do what it promised to do.
5) Product cybersecurity is now on the menu
The FCA story isn’t limited to “contractor networks.” It is increasingly relevant to products sold to the government,
especially where security representations are made about design, development, testing, monitoring, and adherence to standards.
That’s a major expansion in practical exposure: cybersecurity is no longer just your IT team’s problem; it’s also your product,
engineering, quality, and vendor management problem.
A quick tour of headline examples (and what they signal)
The point here isn’t to rubberneck. It’s to learn what types of facts are showing up in real resolutions, and what DOJ appears
to view as material.
Aerojet Rocketdyne: cybersecurity representations can drive FCA exposure
A widely cited early example involved allegations that a major defense contractor misrepresented compliance with cybersecurity
requirements in federal contracts. It also highlighted how whistleblowers with technical expertise can shape FCA enforcement in
this space.
Verizon (MTIPS/TIC controls): cooperation credit is real, but so is the bill
One settlement involving a federal IT service provider alleged failure to fully satisfy certain cybersecurity controls tied to
Trusted Internet Connections requirements. Notably, DOJ also emphasized cooperation steps like self-disclosure, independent
investigation, and remediation, signaling that how you respond can influence outcomes.
Penn State: universities and research ecosystems are not immune
Another settlement underscores that federally funded research environments can face FCA scrutiny when they handle covered defense
information or similar sensitive data under DoD/NASA contracts. Allegations included failure to implement required controls,
inadequate action plans, and issues involving external cloud providers meeting required security baselines.
HNFS/Centene: ignored warnings and delayed remediation are recurring themes
In a major healthcare-adjacent federal contract context, allegations included false certification of cybersecurity compliance and
failures to timely scan and remediate vulnerabilities, plus ignoring external and internal audit warnings. It’s a reminder that FCA
cyber risk is not limited to “defense contractors in camo hoodies.”
MORSECORP: scores, SSPs, and third-party services can become Exhibit A
In a defense contracting context, allegations included use of a third-party email host without ensuring equivalent security
requirements, incomplete implementation of NIST SP 800-171 controls over multiple years, lack of a consolidated system security
plan, and submission of an assessment score that was later contradicted by a third-party consultant’s findings. That mix (controls,
documentation, scoring, and delayed correction) maps neatly onto the FCA’s “knowing misrepresentation” theory.
Georgia Tech Research Corporation: “cyber-defense research” still needs basic defenses
A settlement tied to sensitive cyber-defense research allegations described failures such as not installing/updating/running
anti-malware tools and lacking a system security plan for a lab environment, plus allegedly false assessment scoring. The irony is
painful, but the legal lesson is clear: “We do cybersecurity research” is not the same as “we complied with our cybersecurity requirements.”
Illumina: product security and quality systems enter the FCA chat
Another settlement is notable for focusing on alleged cybersecurity vulnerabilities in systems sold to federal agencies and
alleged deficiencies in security programs and quality systems to identify and address those vulnerabilities. The allegations also
emphasize security across the product lifecycle (design, development, installation, and on-market monitoring), pointing toward a future
where “secure-by-design” representations are treated like any other contractual promise.
The three buckets of cyber-FCA theories
Most cybersecurity FCA matters tend to fit into three practical buckets (sometimes overlapping like a Venn diagram drawn by someone
who drank too much cold brew):
Bucket 1: “We said we were compliant.” (But weren’t.)
This is classic misrepresentation. The contract requires controls; you certify or imply you’ve implemented them; evidence shows you
didn’t. The highest-risk moments often occur during contract award, annual attestations, assessments, or renewals, when someone must
turn reality into a neat sentence.
Bucket 2: “We sold secure products/services.” (But didn’t deliver what we promised.)
This bucket is expanding. If you represent that your product meets certain security standards or follows secure development practices,
and internal facts show those claims were knowingly untrue or recklessly made, DOJ may argue the government paid for security that it
didn’t receive.
Bucket 3: “We’ll report incidents and monitor.” (Unless it’s inconvenient.)
Some contracts and frameworks include duties to monitor, report incidents, preserve evidence, and cooperate with forensic reviews.
The risk here isn’t only the incident; it’s a failure to meet reporting and response obligations after representing you would.
The new compliance math: CMMC and the rise of the “affirmation economy”
In the defense industrial base, CMMC is turning cybersecurity compliance into a structured, phased program with increasing reliance on
assessments, self-assessments, and attestations. When a program explicitly reminds contractors to submit affirmations with assessments,
it’s a clue that attestations matter. And when attestations matter, FCA risk can follow if the affirmations aren’t true.
The key takeaway: as CMMC requirements roll out, organizations should treat every assessment score, affirmation, and certification as a
statement that may be reviewed later with fresh eyes, and potentially by people who do not laugh at your “temporary exception” spreadsheet.
How to avoid turning your SSP into a legal thriller
You don’t need a perfect program. You need a defensible one. That means being able to show:
- What requirements apply (by contract, system, and data type).
- What you implemented (controls, configurations, monitoring).
- What you didn’t implement (with honest scoring and documented rationale).
- How you’re closing gaps (real POA&Ms with owners, budgets, and dates you intend to keep).
- How you validated (testing, scans, audits, evidence retention).
Practical risk-reduction moves that actually work
Below are concrete steps that tend to matter in real-world cyber-FCA risk management. They’re not glamorous. They are effective.
Think of them as “security vegetables”: you may not crave them, but your future self will thank you.
1) Build a contract-to-control map (and keep it updated)
Identify which contracts require which standards (NIST SP 800-171, incident reporting clauses, FedRAMP-related requirements, TIC controls,
etc.). Then tie those obligations to specific systems and data flows. You can’t be compliant in the abstract. Compliance lives where the
data lives.
2) Treat assessments and scores like sworn statements
If you submit scores or affirmations, ensure they are evidence-backed. If something changes, update promptly. If a third-party review
reveals a major discrepancy, treat it as a governance event, not a calendar invite you keep rescheduling until everyone dies.
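For defense contractors, the arithmetic behind a NIST SP 800-171 self-assessment score is itself simple: under the DoD Assessment Methodology, you start at 110 and subtract a weighted deduction for each requirement not fully implemented. The sketch below illustrates only that tallying step; the requirement IDs and weights shown are examples, and the authoritative weights come from the published methodology, not this snippet:

```python
MAX_SCORE = 110  # one point of "credit" per NIST SP 800-171 requirement

def assessment_score(unimplemented: dict[str, int]) -> int:
    """Tally a self-assessment score.

    unimplemented maps requirement ID -> deduction weight for that gap
    (the methodology assigns each requirement a weight; scores can go negative).
    """
    return MAX_SCORE - sum(unimplemented.values())

# Hypothetical gaps with illustrative weights:
gaps = {"3.1.1": 5, "3.4.7": 5, "3.13.11": 3, "3.3.3": 1}
print(assessment_score(gaps))  # 110 - 14 = 96
```

The legal point is that the inputs to this arithmetic are factual claims. If the evidence behind a “fully implemented” entry is thin, the score built on it is thin too, and that is exactly the kind of gap a later reviewer, or a relator, will probe.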
3) Operationalize vulnerability management
Many cyber problems are not mysterious. They are unpatched systems, misconfigurations, and alerts that no one owned. Establish scanning
cadence, patch SLAs, exception approval, and executive visibility for chronic backlog. If you must accept risk, document why, for how long,
and what compensating controls exist.
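The SLA piece of this is easy to automate. Here is a minimal sketch that flags findings open longer than a severity-based remediation deadline; the severity tiers and SLA day counts are illustrative placeholders for whatever your own policy defines:

```python
from datetime import date

# Illustrative remediation SLAs, in days, by severity (substitute your policy's values).
SLA_DAYS = {"critical": 15, "high": 30, "medium": 90}

def overdue(findings: list[dict], today: date) -> list[str]:
    """Return IDs of findings that have been open longer than their SLA allows."""
    return [
        f["id"] for f in findings
        if (today - f["opened"]).days > SLA_DAYS[f["severity"]]
    ]

# Hypothetical tickets exported from a vulnerability tracker:
findings = [
    {"id": "VULN-1", "severity": "critical", "opened": date(2024, 1, 1)},
    {"id": "VULN-2", "severity": "medium", "opened": date(2024, 2, 20)},
]
print(overdue(findings, date(2024, 3, 1)))  # VULN-1 is 60 days old -> overdue
```

Running a report like this on a schedule, and routing chronic overdue items to executives, turns “we scan and remediate” from an aspiration into a documented, time-stamped practice.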
4) Make third-party oversight boringly rigorous
If you outsource email hosting, cloud, logging, or security tooling, verify that the service meets required baselines and reporting obligations.
Bake security requirements into contracts, monitor compliance, and keep evidence. “Our vendor said it was fine” is not the shield some people
think it is.
5) Align legal, security, procurement, and finance
Cyber-FCA risk lives in the space between departments: security knows the gaps, contracts define the promises, legal interprets obligations,
and finance submits claims for payment. Put them in the same room often enough that “What did we certify?” stops being a trick question.
6) Create a culture where bad news travels fast
Many FCA cases are fueled by the perception that concerns were raised and ignored. Make internal reporting safe, track remediation, and ensure
leadership sees recurring issues. The goal is not to punish the messenger. The goal is to avoid becoming the headline.
Important note: This article is general information, not legal advice. If your organization has federal contracts or sells to federal agencies,
consult qualified counsel and compliance professionals about your specific obligations.
Common myths that get organizations in trouble
Myth 1: “It only matters if we get breached.”
Many allegations focus on noncompliance and misrepresentation, not just incidents. A breach can make things worse, but it’s not always required
for FCA exposure if the government argues it paid for compliance it didn’t receive.
Myth 2: “Our SSP is good enough because it exists.”
A security plan is not a magic charm. Plans help when they reflect reality and guide action. They hurt when they’re aspirational fiction that
contradicts what your tools, logs, and audits show.
Myth 3: “We can fix the score later.”
Updating later can be appropriate, if it’s prompt and transparent. But long delays, especially after receiving contrary evidence, can look like
reckless disregard or deliberate avoidance. Timing becomes part of the story.
Myth 4: “Cyber is IT’s problem.”
Cyber-FCA risk is an enterprise problem: contracts, compliance, product engineering, quality, vendor management, and executive oversight all play
roles. If you sell to the government, your cybersecurity posture is part of what you deliver, like uptime, accuracy, and performance.
Conclusion: cybersecurity promises are now payment promises
The big shift is simple: the FCA is increasingly being used to test whether cybersecurity representations were truthful and whether contractual cyber
requirements were treated as essential terms. Settlements across contractors, research environments, service providers, and product companies suggest that
“cyber compliance” is no longer a compliance side quest. It’s part of the deal.
Organizations that want to stay out of trouble don’t need to pretend they’re perfect. They need to (1) know their obligations, (2) build a program that
actually meets them (or transparently documents gaps), (3) validate with evidence, and (4) tell the truth, especially when signing their name to assessments,
scores, and certifications. Because in this era, a cybersecurity checkbox can be “just a checkbox”… right up until it’s Exhibit A.
Experiences from the field: what this shift feels like inside organizations
In many organizations, the “FCA meets cybersecurity” moment doesn’t arrive with sirens. It arrives with a spreadsheet.
Specifically, a spreadsheet full of controls, owners, due dates, and a column titled something like “Compliant (Y/N).”
That column looks harmless, until someone asks, “Are we comfortable putting this in a certification to the government?”
and the room suddenly develops the quiet tension of a Jenga tower on its last block.
Teams often describe the early stage as an “interpretation phase.” Security reads the contract language and sees requirements.
Contracts teams read the same language and see “standard clauses.” Engineering reads it and sees “later.”
Then an assessment deadline appears, and everyone realizes they’ve been living in the same house but using different maps.
The practical work starts: scoping which systems actually store or touch covered information, tracing data flows that nobody’s
diagrammed since the Obama administration, and discovering that “temporary” exceptions are the most permanent structures in corporate history.
The biggest culture change is that security evidence begins to matter like financial evidence. People who used to think in terms of
“best efforts” get pulled into “prove it” mode. Vulnerability tickets are no longer annoying chores; they’re time-stamped artifacts.
Internal audit reports stop being bedtime stories and start looking like potential evidence trails. Even routine decisions, like delaying
patching because a system is fragile, can become important if the organization later needs to explain how it balanced mission needs, compensating
controls, and contractual requirements.
Organizations that adapt well tend to make a few mindset shifts. First, they treat assessments and scores as governance events rather than
paperwork. That means slowing down long enough to align leadership on what is true, what is not yet true, and what will be done, by when, to
close the gap. Second, they stop relying on “tribal knowledge.” If only one person knows how a requirement is met, then it’s not really met;
it’s being held together by a single human duct tape roll. Third, they build a steady rhythm: scan, remediate, validate, document, repeat.
Boring? Yes. Effective? Also yes.
A common “aha” moment is vendor reality. Many organizations assume that buying a reputable third-party service automatically satisfies
requirements. Then they learn the hard truth: the contract may require specific baselines, incident reporting pathways, or audit cooperation,
and “big-name vendor” is not a control. The teams that thrive build vendor checklists that mirror their contractual obligations and keep evidence
of due diligence, so they can confidently say, “Here’s how we verified this,” instead of “We assumed it was fine.”
Finally, there’s a human lesson: people raise concerns before they raise lawsuits. When organizations create a culture where issues can be escalated
and fixed, without punishing the messenger, they reduce the odds that frustration turns into external action. In the cyber-FCA era, good-faith effort
looks like transparency, documentation, and real remediation. The goal is not to be flawless. The goal is to be truthful, prepared, and consistently improving, so your
cybersecurity program tells the same story your certifications do.