Chickenboy
Posts: 24520
Joined: 6/29/2002 From: San Antonio, TX
quote:

ORIGINAL: witpqs

quote:

ORIGINAL: Chickenboy

I guess that's my point. Little is known about the typical 'clearance' of the virus relative to the production of detectable antibodies. Is it possible that someone who is seropositive has a titer that is insufficient to protect against future challenge? Yes. Is it possible that someone who thinks they're antibody negative has only just started with the disease and therefore hasn't had time to mount a detectable serologic response (this takes several days)? Yes. Is it possible that someone who is antibody positive could be shedding virus? Yes.

*Maybe* paired samples - RT-PCR and simultaneous antibody screening - could provide a more useful picture. Antibody positive and simultaneously virus negative gives a greater margin of safety, IMO.

We run into this from time to time with animal disease outbreaks. Serosurveillance is fine for planned, routine screens of a population for diseases in which one should not find antibody (or for quality-checking vaccination strategies when a titer *is* expected). But there is an inherent delay between the introduction of a pathogen (e.g., avian influenza) and the production of detectable antibodies. Depending on the test and the disease, this can be 48-96 hours or more. Performing antibody serosurveillance as a quasi-realtime diagnostic tool in a rapidly evolving disease outbreak is usually selecting the wrong tool for the task. Unless there is a considerable (2+ days) delay in getting antigen test results back, there are better choices. Rapid antigen (read: virus) detection kits can be used as a proxy for virus isolation or PCR in a pinch. But antibody detection tests require too many assumptions about the timing of exposure to be useful in a timely, proactive fashion, IMO.

Lots of folks / companies are trying to dump test kits or treatments or cures on a panicked public. I question the rationale of this antibody detection rollout without deeper thought about whether better choices may exist.

Is this true of any antibody test? Is it a well-understood percentage of results? Would such a test still be useful for determining who *very likely* has immunity? Thanks.

Not all serologic titer profiles are the same. In animal populations on 'typical' vaccine regimens, we look for a consistent vaccination response relative to what they have been vaccinated with and the dose, strain and methodology of the vaccine/vaccination. Massive titer spikes beyond the norm typically reflect either vaccination errors or 'field strain' exposure. In diseases for which we do not vaccinate, *any* serologic response is de facto evidence of field strain exposure.

Nor are all serologic responses to field strain exposure the same. Animals that get really sick from a virus and recover will likely have a more robust serologic response to that agent than those whose immune systems barely 'recognize' the virus as a credible threat upon initial exposure. The same goes for the immunocompromised. It would not surprise me if asymptomatic COVID-19 patients had a different serologic profile than those who got really sick and recovered. In the case of COVID-19 seroconversion, I haven't heard 'boo' about comparing and contrasting expected serologic responses with different patients' clinical presentations. And nothing more substantive on differentiating a protective anamnestic response from a reaction to initial exposure.

So, yeah - it's entirely possible (maybe even likely?) that someone could have detectable antibodies to COVID-19 (depending on the sensitivity of the test) but not be effectively immunized against future challenge. Without further clarification, use of quickie antibody 'snap' kits that give a binary "Yes/No" answer about the presence of antibodies to COVID-19 provides, at best, an unfounded confidence in their value. At worst, it's a serious misuse of manpower, scarce assets and public confidence.

Like many other diagnostic tools, making the test isn't the hard part. It's getting the right test into the hands of the right people, who can then use it to make the right decisions from the results gleaned. Using the wrong test to measure the wrong thing at the wrong time can really only lead to wrong decisions.
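For what it's worth, here's a minimal sketch (Python, purely illustrative) of how the paired-sample idea quoted above could be read, assuming both samples are taken at the same time and each result is taken at face value. The readings are my own shorthand, not a clinical algorithm.

```python
# Illustrative only: one way to read simultaneous RT-PCR (virus) and antibody
# results, per the paired-sample idea above. Not a clinical algorithm.
PAIRED_READINGS = {
    # (virus_detected, antibody_detected): rough reading
    (True, False): "virus detected, no antibodies yet: likely early or current infection",
    (True, True): "virus plus antibodies: seroconverting, but may still be shedding",
    (False, True): "antibodies, no virus detected: the 'greater margin of safety' case",
    (False, False): "neither detected: no evidence of exposure, or sampled too early for both tests",
}

def read_paired(virus_detected: bool, antibody_detected: bool) -> str:
    """Return the illustrative reading for a paired RT-PCR + antibody result."""
    return PAIRED_READINGS[(virus_detected, antibody_detected)]

if __name__ == "__main__":
    print(read_paired(virus_detected=False, antibody_detected=True))
```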
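And to put rough numbers on the 'unfounded confidence' point: what a binary "Yes" from a snap kit actually means depends on prevalence as much as on the kit's sensitivity and specificity. The figures below are hypothetical, just to show the Bayes arithmetic; real kit performance and population prevalence will differ.

```python
# Hypothetical numbers, purely to show the arithmetic behind a binary
# "Yes/No" antibody kit result. Real kit performance and prevalence vary.
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(truly has antibodies | kit says Yes), via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    # Assume a kit with 95% sensitivity and 95% specificity.
    for prevalence in (0.01, 0.05, 0.20):
        ppv = positive_predictive_value(0.95, 0.95, prevalence)
        print(f"prevalence {prevalence:4.0%}: P(antibodies present | kit says Yes) is about {ppv:.0%}")
```

At 1% prevalence that hypothetical kit's "Yes" is right only about one time in six. And even a true positive only tells you antibodies are present, not that they are protective, which is the point above.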