Software Developer with hands-on experience in full-stack development, AI/ML integration, and modern software architecture. Proven ability to build scalable systems using microservices, implement CI/CD pipelines, and leverage AI models for practical applications. Strong foundation in algorithms, data structures, and prompt engineering for LLM-based solutions. Native English and Hebrew speaker with strong analytical thinking and rapid learning ability.
An automated testing and verification framework for "plan builder" products

Mentored by: FineALGs
Data Science Bootcamp 2025 (Data)
Responsibilities:
---
Built a tenant-management service that provides tenant data and blocks unauthorized access, implemented as a CLI tool plus a FastAPI server.
Added Redis caching to the tenant-management service for faster reads: decorator-wrapped accessors first check the in-memory cache before making an expensive query against the on-disk DB.
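A minimal sketch of that decorator pattern, assuming redis-py and an illustrative key scheme and TTL (names here are hypothetical, not the service's real ones):

```python
import functools
import json

import redis

cache = redis.Redis(host="localhost", port=6379, db=0)  # illustrative connection

def cached(ttl_seconds: int = 300):
    """Check the in-memory Redis cache before falling back to the expensive DB call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = f"{func.__name__}:{json.dumps([args, kwargs], sort_keys=True, default=str)}"
            hit = cache.get(key)
            if hit is not None:
                return json.loads(hit)          # fast path: served from cache
            result = func(*args, **kwargs)      # slow path: query the DB on disk
            cache.setex(key, ttl_seconds, json.dumps(result, default=str))
            return result
        return wrapper
    return decorator

@cached(ttl_seconds=60)
def get_tenant(tenant_id: int) -> dict:
    # Placeholder for the real database query.
    return {"id": tenant_id, "name": f"tenant-{tenant_id}"}
```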
Created a monthly scheduler that cleans up the DB archive, with jobs enqueued onto a Redis-backed message queue via APScheduler and RQ.
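A sketch of that flow under the same stack; cleanup_archive and the cron timing are illustrative stand-ins:

```python
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger
from redis import Redis
from rq import Queue

queue = Queue("maintenance", connection=Redis())  # Redis-backed job queue

def cleanup_archive():
    # Stand-in for the real archive-cleanup query against the DB.
    print("deleting expired archive rows...")

def enqueue_cleanup():
    queue.enqueue(cleanup_archive)  # an RQ worker process executes the job

scheduler = BlockingScheduler()
scheduler.add_job(enqueue_cleanup, CronTrigger(day=1, hour=3))  # monthly: 1st at 03:00
scheduler.start()
```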
---
I designed a generic tagging system built on many-to-many relationships, enabling cross-tagging and efficient execution of scenarios.
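A minimal sketch of such a schema in SQLAlchemy, with hypothetical Item and Tag models joined through an association table:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Association table realizing the many-to-many link between items and tags.
item_tags = Table(
    "item_tags", Base.metadata,
    Column("item_id", ForeignKey("items.id"), primary_key=True),
    Column("tag_id", ForeignKey("tags.id"), primary_key=True),
)

class Tag(Base):
    __tablename__ = "tags"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    items = relationship("Item", secondary=item_tags, back_populates="tags")

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    tags = relationship("Tag", secondary=item_tags, back_populates="items")
```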
Created a FastAPI server that exposes the tagging system to all services in the system, providing endpoints for filtering and retrieving data by tag, restricted to authorized access only.
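A condensed sketch of the service's shape, assuming a simple header-token check and an in-memory store in place of the real auth scheme and database:

```python
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Hypothetical token check; the real service's auth scheme is not specified here.
def require_token(x_api_token: str = Header(...)):
    if x_api_token != "secret-demo-token":
        raise HTTPException(status_code=401, detail="unauthorized")

# In-memory stand-in for the tag store.
ITEMS = [{"id": 1, "tags": ["covid", "ml"]}, {"id": 2, "tags": ["ml"]}]

@app.get("/items", dependencies=[Depends(require_token)])
def filter_by_tag(tag: str):
    """Return all items carrying the requested tag."""
    return [item for item in ITEMS if tag in item["tags"]]
```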
---
Established full CI/CD pipelines via GitHub Actions for automated validation, backed by two pytest suites: (1) validating the syntax of every .json file and parsing pyproject.toml; (2) verifying that all MCP servers communicate correctly and expose their tool lists via LangChain, comparing the output against expected output generated with Google Gemini.
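The first suite might look roughly like this (globs and paths are hypothetical; tomllib needs Python 3.11+):

```python
import json
import tomllib  # stdlib TOML parser, Python 3.11+
from pathlib import Path

import pytest

JSON_FILES = list(Path(".").rglob("*.json"))  # hypothetical repo-wide glob

@pytest.mark.parametrize("path", JSON_FILES, ids=str)
def test_json_syntax(path):
    json.loads(path.read_text())  # raises on malformed JSON

def test_pyproject_parses():
    with open("pyproject.toml", "rb") as f:
        tomllib.load(f)  # raises on malformed TOML
```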
---
I built a video-processing component in Python that uses the MediaPipe Face Mesh model to detect facial landmarks and extract precise eye and head positions from each frame. I implemented a signal-smoothing mechanism using a five-frame moving window to stabilize rapid fluctuations in the raw landmark data and ensure clean, reliable signals before further analysis.
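A minimal sketch of that extraction-and-smoothing loop, assuming the mediapipe and opencv-python packages and a hypothetical input file:

```python
from collections import deque

import cv2
import mediapipe as mp
import numpy as np

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
window = deque(maxlen=5)  # five-frame moving window for signal smoothing

cap = cv2.VideoCapture("session.mp4")  # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue  # no face detected in this frame
    landmarks = results.multi_face_landmarks[0].landmark
    pts = np.array([(lm.x, lm.y, lm.z) for lm in landmarks])
    window.append(pts)
    smoothed = np.mean(window, axis=0)  # stabilized landmark positions
cap.release()
```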
I added a head-pose estimation module that computes a Rotation Matrix derived from MediaPipe’s 3D facial landmarks. Using OpenCV’s Perspective-n-Point (PnP) solver, I generated the rotation matrix R and extracted Pitch and Yaw angles for real-time analysis of gaze direction. This stage established the foundation for future expansion toward full 3D orientation tracking.
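A sketch of the PnP stage with generic 3D reference points and a pinhole-camera approximation; the Euler extraction follows one common convention, and the exact axis naming depends on the chosen coordinate frame:

```python
import cv2
import numpy as np

# Generic 3D reference points (nose tip, chin, eye corners, mouth corners);
# illustrative values -- the real module derives them from the Face Mesh model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -63.6, -12.5),
    (-43.3, 32.7, -26.0), (43.3, 32.7, -26.0),
    (-28.9, -28.9, -24.1), (28.9, -28.9, -24.1),
], dtype=np.float64)

def head_pose(image_points: np.ndarray, frame_w: int, frame_h: int):
    """Solve PnP, convert the rotation vector to matrix R, return (pitch, yaw) in degrees."""
    camera_matrix = np.array([[frame_w, 0, frame_w / 2],
                              [0, frame_w, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)  # pinhole approximation
    _, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix,
                              np.zeros(4))  # assume no lens distortion
    R, _ = cv2.Rodrigues(rvec)              # rotation vector -> rotation matrix R
    sy = np.hypot(R[0, 0], R[1, 0])
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))  # rotation about x
    yaw = np.degrees(np.arctan2(-R[2, 0], sy))        # rotation about y
    return pitch, yaw
```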
I developed an attention-classification mechanism based on head angles and gaze signals, where I defined dynamic thresholds distinguishing between three attention levels: Focused, Normal, and Distracted. The classifier uses the Gaze Aversion Rate (GA Rate), the Eye-Aspect Ratio (EAR) for blink detection, and off-screen time tracking to produce an overall attention summary at the end of video processing.
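A compressed sketch of the two signals; every threshold here is illustrative, whereas the real classifier tunes them dynamically:

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|); values near zero indicate a blink."""
    p = np.asarray(eye, dtype=float)
    return (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) / (
        2 * np.linalg.norm(p[0] - p[3]))

YAW_LIMIT, PITCH_LIMIT = 25.0, 20.0  # degrees still counted as facing the screen
GA_RATE_HIGH = 0.4                   # fraction of frames with gaze aversion

def classify_attention(yaw: float, pitch: float, ga_rate: float,
                       off_screen_seconds: float) -> str:
    """Map head angles and gaze signals to Focused / Normal / Distracted."""
    facing = abs(yaw) < YAW_LIMIT and abs(pitch) < PITCH_LIMIT
    if facing and ga_rate < 0.1 and off_screen_seconds == 0:
        return "Focused"
    if not facing or ga_rate > GA_RATE_HIGH or off_screen_seconds > 5:
        return "Distracted"
    return "Normal"
```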
I created an automated results-reporting layer in Python that stores per-frame states, aggregate statistics, and temporal patterns of attention. I integrated visualization capabilities using Matplotlib to generate graphs showing changes in attention over time, and designed the structure so it can later connect to an external dashboard or a CI/CD pipeline.
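A sketch of the reporting step, with illustrative per-frame labels standing in for the classifier's real output:

```python
import matplotlib.pyplot as plt

# Illustrative per-frame labels standing in for the classifier's real output.
frame_states = ["Focused"] * 40 + ["Normal"] * 30 + ["Distracted"] * 20 + ["Focused"] * 10

LEVELS = {"Distracted": 0, "Normal": 1, "Focused": 2}
series = [LEVELS[s] for s in frame_states]

# Aggregate statistics for the end-of-video summary.
summary = {level: frame_states.count(level) / len(frame_states) for level in LEVELS}
print("attention summary:", summary)

plt.figure(figsize=(8, 3))
plt.plot(series, drawstyle="steps-post")
plt.yticks(list(LEVELS.values()), list(LEVELS.keys()))
plt.xlabel("frame")
plt.title("Attention over time")
plt.tight_layout()
plt.savefig("attention_report.png")  # artifact a dashboard or CI job can consume
```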

Project Description: A system that generates complete programming questions with solutions and tests for education and automated interviews.
My Contribution:
---
Project Description: COVID-19 patient classifier.
Tasks:
---
Project Description: Real-time attention-monitoring module in Python using MediaPipe Face Mesh, converting facial landmarks into head-orientation and attention signals.
---
Project Description: Full-stack platform for exchanging skills.
Technologies:
Responsibilities:
---
Project Description: Full-stack shopping and inventory system.
Technologies:
---
Project Description: Interactive puzzle-based escape room game following Jewish history.
Features:
Technologies: