Irfan
Computer Vision Engineer

hover over the robot to execute about_irfan()

irfan — portfolio

Welcome to Irfan's Portfolio

~/portfolio ls

Choose your theme

To change this later, use the toggle above

~/portfolio

01 — About

Who I am.

I like understanding what goes on under the hood: the maths behind the models, why certain architectures work, and how to make them actually useful in the real world.

Education

MSc Artificial Intelligence

The University of Edinburgh

BTech Electrical & Electronics

VIT Vellore, India

Focus

Deep Learning

Computer Vision · Generative AI · AI Agents

Location

Edinburgh, Scotland, UK

02 — Experience

Where I've made
an impact.

2025–Present

Cyberhawk

Computer Vision Engineer

Cyberhawk™

Edinburgh, Scotland, UK · Aug 2025 — Present · Full-time · Hybrid

2024

Forest Research

Forest Research (Northern Research Station) · Industry-Partnered Research

Edinburgh, UK · Mar — Aug 2024 · MSc Dissertation

2021–23

Wipro

SAP BW Consultant

Wipro Limited

Chennai, India · Jul 2021 — Jun 2023 · Full-time

03 — Projects

Things I've
engineered.

scroll to explore

    ╔══════════════════════╗
    ║  import torch        ║
    ║  import cv2          ║
    ║                      ║
    ║  model = YOLO()      ║
    ║  tracker = ByteTrack ║
    ║                      ║
    ║  for frame in video: ║
    ║    detections =      ║
    ║      model(frame)    ║
    ║    tracks =          ║
    ║      tracker.update  ║
    ║        (detections)  ║
    ║                      ║
    ║  >> detecting...     ║
    ║  >> tracking...      ║
    ║  >> players: 22      ║
    ║  >> ball: found      ║
    ║  >> FPS: 30          ║
    ╚══════════════════════╝
    [################] 100%
    STATUS: COMPLETE
click to know more

Computer Vision

Football Analysis Pipeline

YOLOv8 · ByteTrack · SigLIP · OpenCV
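The tracking loop in the card hinges on associating each frame's fresh detections with existing tracks. A minimal sketch of that IoU-based association step — greedy matching only, not ByteTrack's full two-stage cascade; function names and the threshold are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy IoU matching: pair each track with its best unmatched detection."""
    matches, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_iou = None, thresh
        for di, d in enumerate(detections):
            if di in used:
                continue
            score = iou(t, d)
            if score > best_iou:
                best, best_iou = di, score
        if best is not None:
            used.add(best)
            matches.append((ti, best))
    return matches
```

Unmatched detections would then seed new tracks, and tracks unmatched for several frames would be dropped.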

    ┌──────────────────────┐
    │ $ llama2 --chat      │
    │                      │
    │ Loading LoRA weights │
    │ ████████████░░ 85%   │
    │                      │
    │ > quantize: 4-bit    │
    │ > adapter: LoRA      │
    │ > rank: 16           │
    │ > alpha: 32          │
    │                      │
    │ USER: Hello?         │
    │ BOT:  Hi there! I'm  │
    │       your AI...     │
    │                      │
    │ tokens/s: 42.7       │
    │ memory:   3.2GB      │
    │ temp:     0.7        │
    │                      │
    │ [READY]              │
    └──────────────────────┘
    connection: active
click to know more

NLP · Fine-tuning

LLM Character Chatbot

LLaMA 2 · LoRA · QLoRA · Gradio
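A LoRA adapter replaces a full weight update with a low-rank product B·A scaled by alpha/r, matching the `rank: 16`, `alpha: 32` settings on the card. A toy numpy sketch of the adapted forward pass — dimensions here are illustrative, not LLaMA 2's:

```python
import numpy as np

# Toy dimensions for illustration; a real LLaMA 2 hidden size is far larger.
d, r, alpha = 8, 2, 32              # hidden size, LoRA rank, scaling alpha

rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))         # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialised

def lora_forward(x):
    """Adapted forward pass: base weight plus the scaled low-rank update,
    y = x W^T + (alpha / r) * x A^T B^T."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)
```

Because B starts at zero, the adapter is a no-op before training and the model's behaviour is unchanged; only A and B are updated, which is what makes fine-tuning cheap.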

    ┌──────────────────────┐
    │ ATTENTION MATRIX     │
    │                      │
    │ Q·K^T / sqrt(d_k)    │
    │                      │
    │ ░░▓▓░░▒▒░░▓▓░░▒▒     │
    │ ▒▒░░▓▓░░▒▒░░▓▓░░     │
    │ ▓▓▒▒░░▓▓░░▒▒░░▓▓     │
    │ ░░▓▓▒▒░░▓▓░░▒▒░░     │
    │                      │
    │ diag_attn: HIGH      │
    │ factor:    0.15      │
    │ BLEU: +2.12%         │
    │                      │
    │ en -> pt translation │
    │                      │
    │ >> softmax applied   │
    │ >> heads: 8          │
    │ >> layers: 6         │
    │                      │
    └──────────────────────┘
    opus_books: loaded
click to know more

Research

Non-Self-Referential Attention

Transformers · PyTorch · NLP
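The idea on the card — damping each token's attention to itself by a factor — can be prototyped by shifting the diagonal of the pre-softmax scores. A numpy sketch under the assumption that the factor multiplies the unnormalised diagonal weight (the exact formulation used in the project may differ):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def non_self_attention(Q, K, V, factor=0.15):
    """Scaled dot-product attention where each token's attention to itself
    is scaled by `factor`: adding log(factor) to the diagonal of the
    pre-softmax scores multiplies the unnormalised self-weight by factor."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)              # Q·K^T / sqrt(d_k)
    scores[np.diag_indices_from(scores)] += np.log(factor)
    weights = softmax(scores)
    return weights @ V, weights
```

With `factor=1.0` this reduces to standard attention, so the modification is a strict generalisation that can be annealed during training.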

    ╔══════════════════════╗
    ║  VAE Architecture    ║
    ║                      ║
    ║  z ~ N(mu, sigma)    ║
    ║                      ║
    ║  ENCODER ──► z ──►   ║
    ║              │       ║
    ║          [LATENT]    ║
    ║              │       ║
    ║  DECODER ◄── z ◄──   ║
    ║                      ║
    ║  loss = -ELBO        ║
    ║       = KL + recon   ║
    ║                      ║
    ║  >> epoch: 50/50     ║
    ║  >> KL:    0.042     ║
    ║  >> recon: 0.013     ║
    ║                      ║
    ║  [sampling...]       ║
    ╚══════════════════════╝
    MNIST: 98.2% acc
click to know more

Deep Learning

VAE with Continuous Bernoulli

PyTorch · Generative Models · MNIST
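The loss on the card is the negative ELBO: a closed-form KL term between the Gaussian posterior and the standard-normal prior, plus a reconstruction term. A numpy sketch of the KL and the reparameterisation trick — the Continuous Bernoulli likelihood additionally contributes a normalising constant to the reconstruction term, omitted here:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I); moving the randomness into
    eps keeps the sample differentiable w.r.t. mu and log_var."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)
```

When mu = 0 and log_var = 0 the posterior equals the prior and the KL term vanishes, which is a handy sanity check on the formula.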

    ┌──────────────────────┐
    │ RAG PIPELINE         │
    │                      │
    │ PDF ──► chunks ──►   │
    │         embed  ──►   │
    │         FAISS  ──►   │
    │         query  ──►   │
    │         Gemma  ──►   │
    │         answer       │
    │                      │
    │ chunks:  2,847       │
    │ dim:     768         │
    │ top_k:   5           │
    │                      │
    │ Q: "What is..."      │
    │ A: "Based on Ch.3,   │
    │     the concept..."  │
    │                      │
    │ >> retrieval: 12ms   │
    │ >> generation: 340ms │
    └──────────────────────┘
    Gemma-7B: ready
click to know more

LLMs · Retrieval

RAG Textbook Search

Gemma-7B · Embeddings · FAISS
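The retrieval step in the diagram reduces to nearest-neighbour search over chunk embeddings. FAISS does this at scale; a plain numpy cosine-similarity version shows the same idea, with the 768-dim embeddings matching the card:

```python
import numpy as np

def top_k(query_vec, chunk_vecs, k=5):
    """Return indices of the k chunks whose embeddings are most
    cosine-similar to the query embedding, best match first."""
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:k]
```

The retrieved chunks are then packed into the prompt as context for the generator, which is what lets the model answer with "Based on Ch.3, ..." rather than from parametric memory alone.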

    ╔══════════════════════╗
    ║ SPMLL TRAINING       ║
    ║                      ║
    ║ labels_per_sample: 1 ║
    ║ true_labels: many    ║
    ║                      ║
    ║ UPL Loss:            ║
    ║  L = -w·log(p)       ║
    ║    - (1-y)·log(1-p)  ║
    ║                      ║
    ║ vs BCE:              ║
    ║ ███████████████ +72% ║
    ║                      ║
    ║ epoch  loss    acc   ║
    ║ 001    0.693  52.1%  ║
    ║ 050    0.312  78.4%  ║
    ║ 100    0.187  89.7%  ║
    ║                      ║
    ║ [CONVERGED]          ║
    ╚══════════════════════╝
    annotation: minimal
click to know more

Machine Learning

Single-Positive Multi-Label Learning

PyTorch · scikit-learn · UPL Loss
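The UPL-style loss on the card reads as a weighted binary cross-entropy: the single observed positive per sample is up-weighted by w, while every unobserved label is treated as a negative. A numpy sketch — this weighting is my reading of the card, not necessarily the exact formulation used:

```python
import numpy as np

def upl_loss(p, y, w=2.0):
    """Weighted BCE for single-positive multi-label data:
    L = -w * y * log(p) - (1 - y) * log(1 - p), averaged over labels."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return np.mean(-w * y * np.log(p) - (1 - y) * np.log(1 - p))
```

Up-weighting the lone positive counteracts the bias introduced by wrongly treating the many unobserved true labels as negatives.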

    ┌──────────────────────┐
    │ ROBUSTNESS TEST      │
    │                      │
    │ perturbations:       │
    │  ├─ gaussian_noise   │
    │  ├─ blur             │
    │  ├─ contrast         │
    │  ├─ occlusion        │
    │  └─ salt_pepper      │
    │                      │
    │ MODEL    CLEAN NOISY │
    │ SVM       82.1  41.3 │
    │ RF        79.4  38.7 │
    │ AlexNet   91.2  76.8 │
    │                      │
    │ >> DL wins           │
    │ >> delta: +35.5%     │
    │                      │
    │ categories: 15       │
    │ images:    9,247     │
    └──────────────────────┘
    evaluation: done
click to know more

Research

ML vs DL Robustness

AlexNet · SVM · Random Forest
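Two of the perturbations listed — Gaussian noise and salt & pepper — are a few lines of numpy each, assuming images normalised to [0, 1]; the parameter defaults here are illustrative, not the study's settings:

```python
import numpy as np

def gaussian_noise(img, sigma=0.1, rng=None):
    """Additive Gaussian noise, clipped back to the valid [0, 1] range."""
    if rng is None:
        rng = np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def salt_pepper(img, amount=0.05, rng=None):
    """Set a random `amount` fraction of pixels to pure black or white."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0          # pepper
    out[mask > 1 - amount / 2] = 1.0      # salt
    return out
```

Evaluating each model on clean and perturbed copies of the same test set is what produces the CLEAN/NOISY accuracy columns in the card.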

04 — Skills

The tools I
command.

Programming Languages, Libraries & MLOps

Python · PyTorch · NumPy · Pandas · scikit-learn · SQL · MongoDB · LlamaIndex · LangChain · LangGraph · Transformers (Hugging Face) · Ultralytics · OpenCV · RoboFlow · AWS · Docker · Git · GitHub Actions · DVC · MLflow · Kubernetes

Machine Learning

Deep Learning Architectures (Transformers, CNNs, RNNs, VAEs) · Bayesian Modelling · Exact Inference · Approximate Inference · Computer Vision · NLP · LLMs · LLM Fine-Tuning (LoRA) · RAG · LLM Compression

05 — Education

The building
blocks.

2023–24

University of Edinburgh

MSc Artificial Intelligence

Merit

The University of Edinburgh · Edinburgh, UK

2017–21

VIT

BTech Electrical & Electronics

9.28/10

Vellore Institute of Technology · Vellore, India

06 — Contact

Get in touch.

Always happy to connect, whether it's about AI, an interesting problem, or just to say hello.