Cleaning Vendor Performance Review:
90-Day KPIs That Actually Mean Something
Most cleaning contracts include language about performance standards. Almost none define what those standards are, how they are measured, or what happens when they are not met. This is how you fix that.
A cleaning vendor review at 90 days should score six KPIs against defined thresholds. Below 95% on service completion is a warning. Below 90% is a contract review.
The Framework
A 90-day cleaning vendor performance review should score six KPIs against defined numerical thresholds: service completion rate, inspection pass rate, complaint response time, staff consistency, supply accuracy, and escalation quality. Vague language like "performance standards will be maintained" in a contract is not a standard. It is a handshake with no definition. The review process described here gives you the definition. See also our piece on how to read a cleaning completion report for the underlying data you will need to score these KPIs accurately.
Ninety days is the right window for a first formal vendor performance review: long enough to see real patterns, short enough to correct problems before they compound into a contract dispute.
Most facility managers who raise concerns about their cleaning vendor have been unhappy for six months. The data to justify the conversation has been there for four of them.
Why 90 Days Is the Right Window
The first 30 days of a new cleaning contract are noise. The vendor is learning your facility, your access procedures, your escalation contacts, and your specific scope requirements. Technicians are still mapping zones. Supervisors are still calibrating staff assignments. You will see more exceptions in the first 30 days than you will in month four, and that is normal.
Days 30 through 60 are the stabilization window. The program should be finding its rhythm. Major operational gaps become apparent here. If you have a zone that is consistently incomplete, a restroom that is routinely undersupplied, or a shift pattern that does not match your actual operational hours, you will see it in this window.
Day 90 is when you have enough data to score the program honestly. You have at least two full monthly cycles. You have baseline data on every KPI. You have documented exceptions and their resolutions. The 90-day review is not an adversarial meeting. It is a data review that tells both parties where the program stands against the agreed standard.
The vendors who resist a scored 90-day review are the ones who know their numbers are not good. A vendor with a strong program will welcome the review because the data defends them.
KPI 1: Service Completion Rate
Service completion rate is the percentage of scheduled zones completed per shift, measured at the zone level with verified timestamps. This is the single most important KPI in a cleaning vendor review, and it is the most commonly falsified metric when vendors self-report.
A paper checklist that says all zones complete is not a completion rate. A completion rate comes from GPS-verified zone entry and exit data, with dwell times measured against expected durations. Without that data, you are scoring your vendor's honesty, not their performance.
| Score Range | Status | Required Action |
|---|---|---|
| 97% - 100% | Excellent | Document and use as baseline for contract renewal discussion |
| 95% - 96% | Acceptable | No action required. Monitor for downward trend. |
| 90% - 94% | Warning | Formal notification to vendor. Written corrective action plan within 5 business days. |
| Below 90% | Contract Review | Senior leadership meeting. Credit request for undelivered service. 30-day improvement timeline with defined milestones. |
One nuance that matters: completion rate should be scored separately for high-priority zones and standard zones. A 97% overall completion rate that includes two consecutive misses on the executive floor restrooms is a different problem than a 97% rate with the misses distributed evenly across low-traffic storage areas.
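As a rough illustration, the tiered scoring described above can be sketched in a few lines. The zone names, record shape, and Python itself are illustrative assumptions, not the format of any specific platform:

```python
from collections import defaultdict

# Illustrative zone records per shift: (zone_id, priority, completed).
# In practice these values would come from GPS-verified entry/exit data.
shift_log = [
    ("exec-restroom-3F", "high", True),
    ("exec-restroom-4F", "high", False),
    ("lobby", "high", True),
    ("storage-B1", "standard", True),
    ("storage-B2", "standard", True),
    ("open-office-2F", "standard", True),
]

def completion_rates(log):
    """Return the completion rate per priority tier, plus overall."""
    done, total = defaultdict(int), defaultdict(int)
    for _, priority, completed in log:
        total[priority] += 1
        done[priority] += completed  # True counts as 1
    rates = {p: 100 * done[p] / total[p] for p in total}
    rates["overall"] = 100 * sum(done.values()) / sum(total.values())
    return rates

rates = completion_rates(shift_log)
```

With this sample data, the overall rate looks healthy while the high-priority tier (two of three zones completed) flags exactly the executive-floor problem an aggregate number would mask.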
KPI 2: Inspection Pass Rate
Inspection pass rate measures the percentage of zones that meet defined quality standards when evaluated by a supervisor or third-party inspector. The critical word is "defined." A pass rate is only meaningful if pass and fail are specific.
At Millennium, we use a point-based inspection system where each zone is scored on a checklist of specific items: floor condition, waste containers, surface cleanliness, dispenser stock levels, fixture condition, and odor control. A zone passes if it scores above a defined threshold on the inspection form. The form is the same for every inspector, every facility, every shift. That consistency is what makes the data comparable over time.
According to ISSA's cleaning industry benchmarks, a well-managed commercial cleaning program should achieve inspection pass rates above 85% when measured against objective criteria (ISSA, "Cleaning Industry Management Standard," 2023). We set our internal threshold at 90%, which provides a 5-point buffer between our standard and the industry floor.
The inspection data that feeds this KPI is exactly what we cover in our article on digital inspections versus paper checklists. If your vendor is running paper inspections, the pass rate data is only as reliable as the person filling out the form.
What a real inspection scores
Floor condition (dry, debris-free, no streaking), waste containers emptied and relined, horizontal surfaces dusted and wiped, dispenser stock levels, fixture condition including mirrors and hardware, odor control, and any zone-specific requirements from the scope of work.
What makes a pass rate unreliable
Inspections conducted by the same supervisor who manages the cleaning crew. Inspections with no photographic evidence of deficiencies. Scoring scales where everything above a baseline is 'satisfactory.' Inspections that happen only during day shifts when the cleaning crew works overnight.
Pass rate by zone type
Score pass rates separately for restrooms, open-area floor zones, high-touch surface zones, and food service areas. An aggregate pass rate that bundles your conference rooms with your production floor hides the problem zones.
Trend is more important than point-in-time
A 91% pass rate in month 1, declining to 88% in month 2 and 85% in month 3, is a more serious situation than an 87% rate that has been improving for three consecutive periods. Score the trend line, not just the number.
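The trend-aware scoring above can be sketched as a small helper. The status labels and the strict month-over-month decline test are our own simplifications, not an industry-standard formula:

```python
def trend_status(pass_rates, threshold=90.0):
    """Score inspection pass rate on both level and trend.

    pass_rates: monthly rates in chronological order (illustrative input).
    """
    latest = pass_rates[-1]
    declining = all(b < a for a, b in zip(pass_rates, pass_rates[1:]))
    improving = all(b > a for a, b in zip(pass_rates, pass_rates[1:]))
    if latest >= threshold and not declining:
        return "acceptable"
    if declining:
        return "declining - escalate even if above threshold"
    if improving:
        return "below threshold but improving - monitor"
    return "below threshold - corrective action"
```

Note that a sequence like 91, 88, 85 escalates even though it started above threshold, while an improving 83, 85, 87 only warrants monitoring, which is the point the paragraph above makes.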
KPI 3: Complaint Response Time
Complaint response time is one of the most differentiating KPIs in a vendor review and one of the least formally tracked. Most facility managers know whether their vendor responds quickly. Almost none have documented data on it.
The metric should be broken into three components: time to acknowledgment, time to initial assessment, and time to physical remediation. These are different things and should be tracked separately.
Acknowledgment: within 2 hours during business hours
A human response confirming the complaint was received and is being reviewed. Not an auto-reply. A named person from the vendor organization confirming ownership of the issue. Complaints submitted outside business hours should be acknowledged within one hour of the next business day opening.
Assessment: within 4 hours of acknowledgment
The vendor's account manager or supervisor reviews the complaint, confirms the facts, and provides a preliminary plan. This is not a resolution. It is a commitment: here is what happened and here is what we are going to do about it. If the complaint requires investigation before the plan is confirmed, the client should be told that with a specific timeline.
Remediation: within 24 hours for standard issues; same day for health or safety
Physical correction of the issue. A missed zone gets re-serviced. An empty dispenser gets restocked. A damaged surface gets documented and escalated to the appropriate repair channel. Remediation is complete when the issue is resolved, not when the vendor says it will be resolved.
To score this KPI at 90 days, you need a log of every complaint submitted during the review period with timestamps for acknowledgment, assessment, and remediation on each. Every complaint should be submitted in writing, even if it begins verbally. That written record is the data for the KPI.
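One way to turn that written complaint log into the three tracked components. The record fields and timestamps here are hypothetical; the arithmetic is the point:

```python
from datetime import datetime

# Illustrative complaint record; in practice the timestamps come from
# your written complaint log.
complaint = {
    "submitted":    datetime(2024, 3, 4, 9, 15),
    "acknowledged": datetime(2024, 3, 4, 10, 0),
    "assessed":     datetime(2024, 3, 4, 13, 30),
    "remediated":   datetime(2024, 3, 5, 8, 0),
}

def response_metrics(c):
    """Hours elapsed for each of the three tracked components."""
    def hrs(a, b):
        return (b - a).total_seconds() / 3600
    return {
        "ack_hours": hrs(c["submitted"], c["acknowledged"]),       # target: < 2
        "assess_hours": hrs(c["acknowledged"], c["assessed"]),     # target: < 4
        "remediate_hours": hrs(c["submitted"], c["remediated"]),   # target: < 24
    }

metrics = response_metrics(complaint)
```

Scoring each component separately is what exposes the common failure mode: fast acknowledgment followed by slow remediation.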
KPI 4: Staff Consistency
Staff consistency is the percentage of scheduled shifts serviced by your regular assigned technicians. It is a KPI that most facility managers do not track and most vendors would prefer they did not.
The reason it matters is simple. A technician who has worked your facility for three months knows things a substitute does not. They know which floor drain runs slow and needs extra attention. They know that the third-floor break room microwave gets heavy use on Wednesday nights. They know the loading dock doors are locked before midnight and need a different entry route. That institutional knowledge is part of what you are paying for, and it walks out the door every time your assigned team is replaced by whoever the vendor had available.
Industry data from the Building Service Contractors Association International (BSCAI) indicates that average annual turnover in the commercial cleaning sector runs between 200% and 400% (BSCAI Workforce Report, 2022). That means a vendor's workforce can turn over two to four times in a year. The impact on your facility is direct. Track which technicians are actually servicing your account and how often the faces change.
| Staff Consistency Rate | What It Means | Action |
|---|---|---|
| 85% or higher | Assigned team is largely stable. Normal rotation for scheduled days off. | No action required. |
| 70% - 84% | Substitutes covering roughly one shift in four. Moderate concern. | Ask vendor for assigned team stability data and turnover rate on your account. |
| Below 70% | Your facility is being staffed by a rotating pool. Assigned team is nominal. | Formal discussion required. This is a scope delivery issue, not a staffing footnote. |
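A minimal sketch of the consistency calculation, counting a shift as covered only when everyone on it belongs to the assigned team. That is one reasonable definition among several, and the technician names are hypothetical:

```python
def staff_consistency(shifts, assigned_team):
    """Percent of shifts fully staffed by the assigned team.

    shifts: one set of technician names per serviced shift (illustrative).
    """
    assigned = set(assigned_team)
    covered = sum(1 for crew in shifts if crew <= assigned)  # subset test
    return 100 * covered / len(shifts)

rate = staff_consistency(
    shifts=[{"Ana", "Ben"}, {"Ana", "Sub1"}, {"Ana", "Ben"}, {"Ana", "Ben"}],
    assigned_team=["Ana", "Ben"],
)
# 3 of 4 shifts fully covered by the assigned team
```

Whichever definition you use, apply it consistently across periods so the trend is comparable.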
KPI 5: Supply Accuracy
Supply accuracy is the percentage of restocking and replenishment tasks completed correctly per inspection cycle. This covers soap dispensers, paper towels, toilet tissue, sanitizer dispensers, trash liners, and any specialty supplies in your scope.
Supply failures are different from cleaning failures and should be tracked separately. A restroom that is cleaned but has an empty soap dispenser is a supply failure. A conference room that is vacuumed but has an empty whiteboard marker tray is a supply failure. These failures are operationally distinct from a missed cleaning zone, and they require different corrective actions. A cleaning failure is a scheduling or execution problem. A supply failure is a restocking protocol problem or an inventory management problem.
The benchmark for supply accuracy is 95% or higher. At that rate, one dispenser in twenty will not be at the correct level when your inspector checks. That is acceptable. What is not acceptable is consistently empty dispensers on high-traffic fixtures, because that means your restocking protocol does not account for actual usage rates. The fix is either a more frequent restocking cycle or a usage-based supply trigger, similar to the framework we describe in our piece on IoT sensors in commercial restrooms.
KPI 6: Escalation Quality
Escalation quality is the hardest KPI to score because it is qualitative, but it is worth scoring. It measures how well your vendor handles situations that fall outside the standard scope: a building maintenance issue the cleaning crew discovered, a safety concern in a serviced zone, a policy conflict between the scope and a facility change.
A high-quality escalation looks like this: the technician identifies the issue, documents it with a photo and note in the operational platform, flags it to the supervisor on the same shift, and the supervisor contacts the facility manager before the start of the next business day with the specific issue, the location, and the recommended action.
A low-quality escalation looks like this: the technician notices the issue, does not document it, and the facility manager finds out because a building occupant raised the problem three days later.
Score this KPI by reviewing every escalation that occurred during the 90-day review period. For each one, assess whether it was identified, documented, communicated, and resolved in a timeframe that served the facility. A vendor with strong escalation quality is one that is actually paying attention to your building, not just cleaning it.
The 90-Day Scorecard at a Glance
| KPI | Warning | Contract Review | Excellent |
|---|---|---|---|
| Service Completion Rate | Below 95% | Below 90% | 97%+ |
| Inspection Pass Rate | Below 90% | Below 85% | 95%+ |
| Complaint Acknowledgment | Over 4 hours | Over 8 hours | Under 2 hours |
| Complaint Remediation | Over 48 hours | Over 72 hours | Under 24 hours |
| Staff Consistency | Below 75% | Below 65% | 85%+ |
| Supply Accuracy | Below 92% | Below 85% | 98%+ |
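The scorecard can be expressed as a small lookup, assuming the thresholds in the table above. Note that the two time-based KPIs breach when the value is too high rather than too low:

```python
# Thresholds from the scorecard above: (direction, warning, contract_review).
# "below" KPIs breach when the value falls under the bound;
# "above" KPIs (response hours) breach when it exceeds the bound.
THRESHOLDS = {
    "service_completion": ("below", 95, 90),
    "inspection_pass":    ("below", 90, 85),
    "ack_hours":          ("above", 4, 8),
    "remediation_hours":  ("above", 48, 72),
    "staff_consistency":  ("below", 75, 65),
    "supply_accuracy":    ("below", 92, 85),
}

def score(kpi, value):
    """Map a measured KPI value to its scorecard status."""
    direction, warn, review = THRESHOLDS[kpi]
    def breached(bound):
        return value < bound if direction == "below" else value > bound
    if breached(review):
        return "contract review"
    if breached(warn):
        return "warning"
    return "meets threshold"
```

For example, a 93% completion rate lands in warning territory, while a nine-hour acknowledgment lag is already a contract review conversation.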
How to Run the Review Meeting
The 90-day review is a data meeting. It should not be a conversation about feelings or impressions. Bring the scored KPIs. Every metric should have a number, a source, and a comparison to threshold.
Invite your account manager and ask that the vendor's operations manager attend. The account manager handles the relationship. The operations manager owns the execution. Both need to be in the room when the data is presented.
Walk through each KPI in order. For each one that meets or exceeds threshold, note it briefly. For each one that falls below warning, present the specific incidents that contributed to the score. Do not frame this as an accusation. Frame it as data that requires explanation and a correction plan.
If the vendor contests the data, ask them what data they have that contradicts it. A vendor with real GPS verification and a real inspection platform can pull their records immediately. A vendor who says "that does not match what our records show" without producing the records is telling you something important about their records.
End the meeting with a written summary of agreed corrective actions, timelines, and the next review date. Send it to the vendor in writing within 24 hours. That document is your follow-up tool and, if the situation escalates, your contract documentation.
When the Review Leads to a Contract Discussion
If your 90-day review shows multiple KPIs below warning threshold, or a single KPI in contract review territory, you are in a different conversation. This is not a corrective action discussion. This is a program performance discussion.
Most commercial cleaning contracts include language about service standards and remedies for non-performance, but the language is often vague. The value of a KPI-based review is that it creates a documented performance record against specific thresholds. If you have three months of data showing consistent below-threshold performance, that record supports a credit request for undelivered service, a renegotiation of scope or price, or in sustained cases, a contract termination discussion.
Before that conversation, make sure you have organized your evidence: the scored KPI data for each period, the specific incidents that drove below-threshold scores, your written complaint log with response and remediation timestamps, and any written corrective action plans the vendor provided. Bring the documentation, not just the summary.
A vendor who has been providing consistently below-threshold service rarely disputes the outcome of a well-documented performance review. The data makes the argument.
Frequently Asked Questions
When should the first formal review happen, and how often after that?
The first formal review should happen at 90 days. That is long enough to see real patterns and short enough to correct problems before they compound into contract disputes. After the initial 90-day review, quarterly is sufficient for stable accounts. Monthly check-ins between formal reviews are appropriate if the account is new, if there has been a recent service failure, or if you are heading into a high-traffic period. A formal review means scored KPIs against documented thresholds, not an informal conversation with your account manager.
What is a good service completion rate?
The threshold for a healthy program is 95% or higher on service completion rate, measured at the zone level over a 30-day period. A single week below 95% is a data point. Two consecutive weeks below 95% is a pattern worth discussing with your account manager. Two consecutive periods below 90% means the scope is not being delivered and you have grounds for a formal contract performance conversation. Completion rate without zone-level GPS verification is not a real metric. It is a self-reported number.
What inspection pass rate should you expect?
Inspection pass rate should be at or above 90% when measured against your defined standards. The definition of pass versus fail matters enormously here. A pass rate measured against vague standards like 'satisfactory' or 'acceptable' is not useful. The standard should be specific: surfaces visually clean, no missed waste containers, dispensers stocked within defined thresholds, floors free of debris. With specific standards, a 90% pass rate is meaningful. Without them, a 95% pass rate means nothing.
How fast should a vendor respond to complaints?
Acknowledgment of a complaint should happen within two hours during business hours. Initial assessment and a remediation plan should follow within four hours. Physical remediation should be complete within 24 hours for non-safety issues and within the same business day for issues affecting occupant health or access. Complaint response time is one of the most differentiating KPIs in a vendor review. Most vendors acknowledge quickly and remediate slowly. Track the gap between acknowledgment and completion, not just whether the complaint was acknowledged.
What is staff consistency, and why does it matter?
Staff consistency measures the percentage of scheduled shifts serviced by your regular assigned technicians versus substitute or temporary staff. A facility with three assigned technicians should see those three technicians on shift for at least 80% of service nights. Substitutes are expected during vacations and illnesses. What is not acceptable is a pattern where the assigned team is consistently unavailable and your facility is being serviced by whoever is available. Substitutes do not know your facility, your equipment locations, your exception procedures, or your escalation contacts.
What is supply accuracy?
Supply accuracy is the percentage of restocking and supply replenishment tasks completed correctly per inspection cycle. This includes soap dispensers filled to threshold, paper products stocked, sanitizer dispensers functional, and any specialty supplies restocked per the scope. An inspection-based score for supply accuracy should be at or above 95%. Track it separately from general cleanliness scores, because supply failures are a different operational problem than cleaning failures and require different corrective actions.
What if a vendor scores below threshold on multiple KPIs?
A vendor scoring below threshold on two or more KPIs simultaneously is a systemic problem, not an execution problem. Systemic problems require a root cause meeting with senior vendor leadership, not your account manager. Come to the meeting with the scored KPI data, the specific incidents that contributed to each score, and a written performance improvement timeline with defined milestones. A vendor that responds to a multi-KPI failure with reassurances instead of a documented corrective plan is not a vendor that will improve.
Know your vendor's real score before the contract renews.
MFS accounts include a formal 90-day performance review with scored KPIs, GPS-verified completion data, and digital inspection records. No self-reporting. No form letters. Just the data.