PeopleBooks

Evidence & Evaluation Definitions

PeopleBooks evaluates execution across three structural layers. This page defines what is measured, how it is scored, and what verdicts mean.

Purpose of This Document

This document defines the PeopleBooks evidence system and scoring methodology. It establishes what constitutes valid structural evidence, how evidence is tagged and qualified, and what scoring thresholds mean in practice.

PeopleBooks measures execution reliability across three structural layers: Work Design, Team Reality & Gaps, and Alignment & Performance. This is not a cultural assessment, performance review, or hiring diagnostic. It is a structural audit.

How to Use This Document

Use this reference when completing evaluator interviews to understand what evidence is structurally meaningful versus noise. Each evaluator requires specific evidence types tagged with structured markers.

Evidence tags such as [Priority], [Org], [Team], and [Allocation] anchor responses to observable structural elements rather than opinions or intentions.

Scoring is algorithmic. Higher scores require stronger structural evidence. Weak or absent evidence caps scores regardless of narrative quality.

Evidence Qualification Ladder

From strongest to weakest:

1. Designed Execution: Mechanism + Authority defined
2. Authority & Mechanism Defined: Who decides + How it operates
3. Observable Structure: Documented roles and processes
4. Declared Intent: Stated goals without structure
5. Assumption: No structural evidence

PeopleBooks scoring strength increases as evidence moves upward on this ladder.

Core Evidence Concepts (All Stages)

Evidence Strength Levels

All evaluators distinguish between structural evidence (observable, documented, assigned) and assumptions (inferred, planned, implicit).

| Evidence Level | Description | Scoring Impact |
| --- | --- | --- |
| Strong Evidence | Specific role, responsibility, decision owner, and mechanism clearly defined | Supports high score |
| Moderate Evidence | Partial clarity but missing decision rights or boundaries | Caps scoring potential |
| Weak Evidence | General statements or inferred structure | Limits scoring |
| Absent Evidence | No description provided | Forces [Assumption] and reduces score |
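The capping behaviour described here can be sketched in code. This is an illustrative sketch only: the specific cap values (79, 59, 39) are assumptions chosen to line up with the published score bands, not documented PeopleBooks constants.

```python
# Illustrative sketch: evidence strength caps the attainable score.
# Cap values are assumptions aligned to the 80/60/40 band edges,
# not published PeopleBooks thresholds.
EVIDENCE_CAPS = {
    "strong": 100,   # specific role, owner, and mechanism defined
    "moderate": 79,  # partial clarity; missing decision rights or boundaries
    "weak": 59,      # general statements or inferred structure
    "absent": 39,    # no description provided; forces [Assumption]
}

def capped_score(raw_score: int, evidence_level: str) -> int:
    """Cap a component score by the strength of its supporting evidence."""
    return min(raw_score, EVIDENCE_CAPS[evidence_level])
```

However strong the narrative, `capped_score(90, "weak")` returns 59: weak evidence limits the score regardless of narrative quality.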

Observable vs. Inferred

Observable evidence can be pointed to: "This person owns this decision. This document defines this process. This meeting cadence governs alignment."

Inferred evidence is structural guesswork: "We think the team understands priorities. The founder probably handles escalations. Roles seem clear enough."

Documented vs. Verbal

Documented structure (org charts, role definitions, decision matrices, operating rhythms) is structurally stronger than verbal confirmation. If execution depends on memory and repetition, it is not designed.

Evidence Tag System

Work Design Evidence Tags

[Priority]: Strategic work classification
[Org]: Role and authority structure
[Books]: Documented operating system
[Assumption]: Structural design hypothesis

Team Reality & Alignment Evidence Tags

[Team]: Current team configuration
[Allocation]: Capacity distribution and load
[Observation]: Observed execution patterns
[WorkDesign]: Reference to design layer
[Assumption]: Capacity or alignment hypothesis

What PeopleBooks Measures

1. Work Design: Is execution deliberately designed?
2. Team Reality: Can the current team operate the design?
3. Alignment & Performance: Does execution remain cohesive under pressure?

PeopleBooks does not evaluate culture, performance quality, or hiring decisions. It measures structural execution reliability.

Common Misinterpretations (Global)

❌ "We have great people, so execution is fine."

Talent quality is not structural design. High performers operating in ambiguous systems still produce execution risk.

❌ "The founder knows how everything works."

Founder knowledge is not transferable structure. If execution depends on one person's memory, it is not designed.

❌ "We communicate well, so alignment is strong."

Communication frequency is not alignment. Alignment requires shared priorities, explicit decision rights, and self-regulating mechanisms.

❌ "We're planning to document everything soon."

Intent is not evidence. Until structure is observable and operational, it is an assumption.

Work Design Scoring Framework

Work Design Score: 100 Points

A. Role Clarity & Boundary Definition (0–25)
B. Decision Ownership & Authority (0–25)
C. Strategy Coverage & Integration (0–25)
D. Founder Dependency Risk (0–25)

Score bands:
80–100: Designed execution system
60–79: Scaling friction likely
40–59: Founder-dependent execution
0–39: Structurally unsafe execution

Verdicts: PASS / CONDITIONAL / FAIL
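The score bands above translate directly into a lookup. A minimal sketch, using the thresholds from the framework (the exact mapping of bands to PASS/CONDITIONAL/FAIL verdicts is not spelled out here, so it is omitted):

```python
def work_design_band(score: int) -> str:
    """Map a 0-100 Work Design score to its band label."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "Designed execution system"
    if score >= 60:
        return "Scaling friction likely"
    if score >= 40:
        return "Founder-dependent execution"
    return "Structurally unsafe execution"
```

The same four-band structure applies to the Team Reality and Alignment & Performance scores, with their own labels.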

Team Reality Scoring Framework

Team Reality Score: 100 Points

A. Role Coverage (0–25)
B. Capacity Realism (0–25)
C. Dependency Concentration (0–25)
D. Founder Operational Dependency (0–25)

Score bands:
80–100: Team can execute independently
60–79: Coverage gaps emerging
40–59: Capacity structurally insufficient
0–39: Team cannot execute design

Alignment & Performance Framework

Alignment & Performance Score: 100 Points

A. Role Clarity in Practice (0–25)
B. Goal Alignment & Prioritisation (0–25)
C. Operating Rhythm & Cadence (0–25)
D. Execution Under Pressure (0–25)

Primary classification: Self-regulating / Coordination-heavy / Escalation-dependent

Score bands:
80–100: Execution holds under pressure
60–79: Coordination friction visible
40–59: Execution misaligned
0–39: System requires constant correction

Measurement Philosophy

Work Design Score answers: "Is the system designed?"

Team Reality Score answers: "Can the current people run it?"

Alignment & Performance Score answers: "Will it hold under pressure?"

If any layer fails, scale is structurally unsafe.
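This gate can be sketched as a simple conjunction: scale is treated as safe only when every layer clears its threshold. The passing threshold of 60 below is an illustrative assumption, not a documented constant.

```python
PASS_THRESHOLD = 60  # assumed minimum for a layer to "hold"; not a published value

def scale_is_safe(work_design: int, team_reality: int, alignment: int) -> bool:
    """Scale is structurally safe only if no layer fails."""
    return all(score >= PASS_THRESHOLD
               for score in (work_design, team_reality, alignment))
```

A single failing layer flips the verdict: `scale_is_safe(85, 70, 40)` is False even though the other two layers score well.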

Closing Principle

If structure is undefined: Risk is undefined.

If ownership is implicit: Scale is unstable.

If decisions escalate routinely: The system is not self-regulating.