BECOME A JUDGE


1. WATEF Hackathon Structure, Technical Depth, and Scope

The West Africa Tech Excellence Forum (WATEF) was established to recognise and amplify exceptional achievements in technology and software engineering across the West African region. WATEF is committed to elevating regional technical excellence, advancing professional standards, and reinforcing West Africa’s position in global technology ecosystems.

The WATEF Hackathon is a formally governed, enterprise-focused competition embedded within this institutional mandate. Its purpose is to drive solution development that aligns with real-world enterprise constraints, regional technology needs, and global technical standards. The hackathon framework requires participants to address defined operational problems with engineered solutions that demonstrate readiness for deployment in complex organisational environments, including regulated sectors and mission-critical systems.

This competition is not a conceptual or exploratory challenge for early-stage experimentation. Instead, it serves as a structured evaluation of solutions that meet rigorous criteria for operational clarity, lifecycle sustainment, and enterprise relevance. Submissions are expected to adhere to sound engineering principles and align with established organisational governance models.


2. Technical Domains and Solution Scope

The WATEF Hackathon focuses on technical domains that reflect high-impact enterprise and infrastructure challenges relevant to West Africa while maintaining alignment with global engineering standards.

Core Technical Domains

Solutions may be submitted within the following domains:

  • Enterprise software systems and mission-critical digital platforms

  • Cloud-native, hybrid, and on-premise infrastructure architectures

  • Cybersecurity, digital risk management, and secure system design

  • Data platforms, analytics systems, and information governance frameworks

  • Interoperable systems supporting multi-agency, cross-platform, or cross-border operations

 

Scope Boundaries

All solutions must be designed for production environments. Submissions must account for:

  • Integration with existing systems or external services

  • Operational constraints such as uptime, fault tolerance, and monitoring

  • Regulatory, data protection, and security considerations

  • Long-term maintainability and controlled system evolution

Judge Selection Criteria

Judging appointments are extended to professionals with demonstrated responsibility for system design, technical review, security oversight, or infrastructure governance. Appointed judges participate in structured evaluation processes consistent with enterprise and public-sector technology review practices.

3. Participation Standards and Submission Expectations

Participation in the WATEF Hackathon is limited to teams and individuals with verifiable technical credibility and enterprise-capable experience. Eligibility is not defined by academic status alone. Professional competency in systems engineering, software architecture, infrastructure implementation, or comparable domains is required.

Eligibility Standards

Eligibility is assessed on the demonstrated ability to design, document, and defend enterprise-grade systems.

The hackathon is not designed for introductory learning or skill acquisition.

Mandatory Submission Requirements

Each submission must include structured documentation addressing the following areas:

  • System Architecture: Logical and physical architecture diagrams with component descriptions

  • Data and Integration: Data flows, interfaces, APIs, and integration points

  • Security Model: Threat assumptions, access controls, and risk mitigation strategies

  • Implementation Strategy: Deployment assumptions, dependencies, and environmental constraints

  • Operational Readiness: Scalability, reliability, monitoring, and support considerations

Submissions lacking documentation depth or operational clarity will not proceed to advanced evaluation stages.

4. Advanced Judging Responsibilities and Technical Evaluation Framework

Judging at the WATEF Hackathon is conducted by a panel of senior technical experts selected for domain-relevant expertise. The evaluation framework is structured, evidence-driven, and aligned with enterprise best practices.

Judges are appointed not only to score submissions, but to exercise senior-level technical judgment comparable to enterprise architecture review boards, technology risk committees, and regulated system approval panels.

The WATEF Hackathon employs an expert-led, consensus-driven judging model. Judges apply structured technical assessment rather than individual preference or subjective scoring.

Evaluation mirrors enterprise solution review processes used in regulated and large-scale operational environments.

Judges assess submissions based on:

  • Architectural soundness and engineering correctness

  • Alignment with enterprise governance and regulatory expectations

  • Deployment feasibility and operational readiness

  • Security posture and risk management discipline

  • Documentation clarity and technical decision traceability

Judging outcomes are derived through structured deliberation and consolidated scoring.

In addition to standard evaluation duties, WATEF judges are responsible for the following advanced functions:

  • Interpreting technical decisions within real organisational and regulatory contexts

  • Assessing the downstream operational consequences of architectural choices

  • Identifying hidden risk, fragility, or governance gaps not explicitly stated by participants

  • Distinguishing between technically plausible designs and operationally defensible systems

Judges must evaluate submissions as if approving them for controlled deployment, not demonstration or experimentation.

Judges are expected to challenge undocumented assumptions, incomplete threat models, and architectural shortcuts, even when solutions appear functional at a surface level.

 

The WATEF evaluation framework is structured around enterprise system viability rather than feature completeness or innovation claims.

Judges must apply the following evaluation lenses concurrently:

System Integrity
Assessment of whether the system forms a coherent whole, with clear boundaries, responsibilities, and interaction patterns.

Operational Reality
Evaluation of how the system behaves under load, failure conditions, scaling events, and maintenance cycles.

Governance Readiness
Assessment of whether the system can be governed, audited, secured, and evolved within an institutional environment.

Risk Awareness
Identification of technical, operational, security, and compliance risks, including those not directly addressed by participants.

Judges should document where solutions rely on future work, undefined controls, or external assumptions that materially affect viability.

 

WATEF judging outcomes are determined through structured consensus rather than individual authority.

Judges are required to:

  • Present assessments using technical evidence

  • Engage constructively in peer deliberation

  • Adjust positions when presented with substantiated counter-analysis

Where disagreement persists, emphasis should be placed on institutional defensibility, not personal interpretation.

Final recommendations must reflect what a responsible enterprise or public-sector body could reasonably approve for deployment.

5. Judge Selection Criteria

This section defines the standards governing the appointment of judges to the WATEF Hackathon and the structured timeline under which all evaluations are conducted. Both elements are designed to ensure technical credibility, consistency of assessment, and institutional defensibility of outcomes.

Judges must possess verifiable experience in one or more of the hackathon’s defined technical tracks. This includes hands-on responsibility for system design, architecture, security, data platforms, infrastructure, or comparable enterprise technology domains.

Preference is given to individuals who have operated within enterprise, infrastructure-scale, or regulated environments. This ensures judges can assess solutions within realistic organisational, compliance, and operational constraints.

Judges must have prior exposure to formal system evaluation, architecture review, technical audit, or governance oversight processes. This may include participation in internal review boards, technology risk committees, or structured solution approval processes.

Judges are expected to have held roles where technical decisions carried measurable operational or organisational consequences. This criterion ensures evaluators understand the implications of approving or rejecting system designs.

Each judge is assigned only to tracks that directly align with their professional expertise. Cross-track evaluation is not permitted without formal reassignment to preserve assessment integrity.

 

Judges must be free of conflicts of interest related to participating teams or solutions. Any potential conflict must be disclosed prior to appointment or reassignment.

6. Evaluation Timeline

The WATEF Hackathon evaluation process follows a defined, multi-phase timeline designed to ensure thorough technical review and consistent application of standards.

Phase 1: Documentation Review
During this phase, judges receive access to all submitted documentation. Responsibilities include reviewing architecture designs, identifying technical gaps, assessing risk assumptions, and preparing structured evaluation notes. No final scoring occurs at this stage.

Phase 2: Live Evaluation
Live evaluation sessions are conducted to validate documentation, clarify design decisions, and test technical reasoning. Judges assess alignment between written submissions and verbal explanations, focusing on architectural coherence, operational assumptions, and risk awareness.

Phase 3: Panel Deliberation
Following live evaluations, judges participate in structured deliberation sessions. Individual assessments are discussed, challenged, and consolidated into panel-level conclusions. Emphasis is placed on evidence-based reasoning and institutional defensibility.

Phase 4: Scoring and Ratification
Scores are finalised using the approved scoring rubric. Panel recommendations are submitted to the WATEF Hackathon Committee, which holds final authority over outcome ratification.

Phase 5: Decision Records
Judges may be required to provide brief technical rationales supporting final decisions. These records support internal governance, auditability, and future process review.

Meet Our Past Judges

This distinguished panel of industry leaders, innovators, and pioneers is committed to recognising excellence and driving the future of African technology and innovation.

Engr. Tunde Osei

AI/ML & Cloud Systems | Judge 2023-2024


Dr. Amina Bello

Cybersecurity & Digital Policy | Judge 2022-2024


Mr. Adewale Bakare

Blockchain & Web3 Development | Judge 2023-2024


Mrs. Rose Kanu

HealthTech & Digital Health Innovation | Judge 2023-2024


Ms. Priscilla Samuel Nwachukwu

Fintech & Digital Inclusion | Judge 2024-2025


Engr. Joshua Umejuru

Drilling & Well Engineer | Judge 2023-2024


Ms. Adaobu Amini-Philips

Finance and Procurement Executive


Ms. Naomi Chukwurah

Program and Project Leader


Dr. Leesi Saturday Komi

Telehealth and Public Health Innovation in Africa | Judge 2022-2023


Mr. Babalola Ayodele

Communications and Public Affairs Professional | Judge 2024-2025


Mr. Joshua Temiloluwa

Biologist and Environmental Researcher | Judge 2024-2025


Ms. Okeoghene Elebe

Biologist and Environmental Researcher | Judge 2024-2025


Mr. Semiu Temidayo Fasasi

Environmental and Mechanical Engineer


Mr. Bryan Anoruo

Motion and Product Designer
