Adewole Benjamin Oyediran

Residence permit: RS0435342 | Work permit: Polish
Date of birth: 29/09/1993
Phone number: (+48) 666534205 (Home)
Email address: bensha2019@[Link]
Website: [Link]
LinkedIn: [Link]/in/adewole-oyediran
Address: 00-023, Warsaw, Poland (Work)

ABOUT ME

Cloud Data Engineer and Data Architect with 10+ years of experience across the full data lifecycle, progressing from data collection and customer insight through data analytics to advanced cloud data engineering. Founder of Bensha Group, where I deliver cloud data engineering and architecture services to clients. Skilled in designing and optimizing scalable data pipelines on GCP (primary) and Azure using Databricks (Spark / Delta Lake), Snowflake, Python, SQL and Power BI. Experienced in collaborating with stakeholders to translate business requirements into reliable, governed, high-performance data solutions that support operational goals and informed decision-making. Open to full-time employment contracts, hybrid B2B engagements, or long-term project delivery focused on building practical, scalable cloud data systems that create measurable business value.

EDUCATION AND TRAINING

03/2021 – 07/2025 | Lublin, Poland

BSC – BUSINESS MANAGEMENT AND ADMINISTRATION – The University of Economics and Innovation, Lublin
Website: [Link] | EQF level: 7

SKILLS

Technical Skills

Data Architecture · Azure Databricks · Power BI · SQL · GitHub Actions (CI/CD) · Tableau · Data Modelling · Data Quality Management · Azure DevOps · Data Compliance Management · Snowflake · CPG Industry Experience · Data Cataloging · GCP · Python · GitHub · Azure Data Factory · dbt · Agile (Scrum/Kanban) · Spark-Based Data Engineering

Functional / Leadership / Consulting Skills

Consulting & Presentation Capabilities · Cross-Team Collaboration (Data Science / Platform / Integration) · Business Judgment · Requirements Intake & Translation · Data as a Product Implementation · Cost Efficiency Decisioning · Design of Reusable & Enriched Data Models · Steering Data / BI Engineering Teams · Alignment to Business Priorities · Estimation & Estimation Challenge · Intercultural Awareness · Analytical Thinking · Continuous Best Practice Improvement · Problem Solving · Interpersonal Communication · Delivery Quality Assurance

WORK EXPERIENCE

BENSHA GROUP – (REMOTE | B2B CONTRACT DELIVERY) – WARSAW


FOUNDER / DATA ENGINEER / DATA ARCHITECT / DATA CONSULTANT – 2023 – CURRENT

• Deliver cloud data engineering and architecture services to B2B clients aligned to real operational outcomes.
• Designed and deployed scalable data pipelines and architectures across GCP, Azure, Databricks and Snowflake, improving data accessibility and reducing processing time by up to 40%.
• Automated ETL/ELT workflows using Python and Spark to increase reliability and accelerate delivery timelines.
• Built analytics and reporting solutions using SQL and Power BI to support confident decision-making.
• Provided strategic consulting to streamline operations, optimize cost efficiency and improve performance.
• Led end-to-end cloud delivery engagements ensuring accuracy, governance and sustainable value.
• Clients available upon request.

BENSHA GROUP – (SELF-EMPLOYED | B2B) – WARSAW


DATA ANALYST – 08/2021 – CURRENT

• Translated business questions into structured analytical requirements supporting confident, data-driven decisions.
• Developed dashboards and automated reporting using SQL, Power BI and Python to simplify complex information into
meaningful insights.
• Defined KPIs, measured trends and surfaced opportunities to improve operational performance.
• Validated data from multiple sources to ensure accuracy, consistency and alignment with business logic.
• Collaborated with customer-facing teams to convert customer insights into actionable outcomes.
• Documented data definitions and analytical processes to support transparency and scalability.

ARESTONE TYRES ZIMBABWE LTD – HARARE, ZIMBABWE


CUSTOMER INSIGHT ANALYST – 07/2020 – 02/2021

• Streamlined administrative processes by implementing efficient procedures and documentation systems, reducing processing
time by 30% and improving record accuracy by 25%.
• Boosted staff morale with regular performance feedback, leading to a 20% increase in job satisfaction and a 15%
improvement in team productivity.
• Demonstrated operational expertise in Microsoft Office programs, utilizing Excel for data analysis, PowerPoint for impactful
presentations, and Outlook for efficient communication, reducing email response time by 40%.
• Enhanced office productivity by optimizing workflows and managing daily schedules, improving task completion rates by 35%
and reducing scheduling conflicts by 50%.

PRACTICAL SAMPLING INTERNATIONAL - PSI – IKEJA, NIGERIA


DATA COLLECTION & FIELD RESEARCH – 11/2015 – 08/2019

• Conducted field data collection across multiple locations, gathering customer feedback and product insights for various client
companies.
• Engaged directly with customers to understand their perceptions, satisfaction levels, and suggestions for product
improvement.
• Utilized tools such as SurveyMonkey and digital data-capture platforms to record responses accurately and securely.
• Ensured data integrity and consistency by verifying submissions, cleaning datasets, and maintaining confidentiality standards.
• Coordinated and guided team members during field assignments, ensuring coverage, professionalism, and adherence to
data-collection protocols.
• Trained new team members on data-entry procedures, customer interaction techniques, and quality-assurance best practices
to maintain high data reliability.

PROJECTS

End-to-End Microsoft Fabric Lakehouse (Medallion Architecture)

Project Overview:
Designed and implemented an end-to-end Medallion Architecture (Bronze -> Silver -> Gold) data pipeline within Microsoft
Fabric to automate data ingestion, transformation, and enrichment from external sources (Kaggle datasets) into business-ready
layers.

Key Responsibilities:
• Developed Fabric Pipelines to orchestrate the flow of data across Landing, Bronze, Silver, and Gold layers.
• Engineered parameterized, reusable notebooks to control data processing stages and enable flexible automation.
• Defined data schemas and validation logic to ensure consistency and data quality across ingestion points.
• Built Delta Lake tables for optimized storage, versioning, and efficient incremental updates.
• Implemented defensive programming techniques to handle schema drift, empty datasets, and runtime failures.
• Created business-driven transformations aligning analytical outputs with organizational KPIs.
• Integrated Power BI dashboards for real-time insights into pipeline performance and data freshness.
• Logged and monitored pipeline execution using Fabric’s monitoring and logging features for observability.
Tech Stack: Microsoft Fabric, Lakehouse, PySpark, Delta Tables, Power BI, OneLake, Fabric Pipeline

Link [Link]
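
Illustrative sketch (not part of the original deliverable): a minimal PySpark cell showing what the Bronze-to-Silver promotion step of this Medallion pipeline could look like in a Fabric notebook. The table names, the "order_id" key and the "order_amount" rule are hypothetical placeholders, not the actual project schema; the real notebooks were parameterized and orchestrated by Fabric Pipelines.

# Illustrative Bronze -> Silver promotion step (PySpark in a Fabric notebook).
# Table names, keys and rules below are placeholders for the real project schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Fabric runtime

BRONZE_TABLE = "bronze_sales"   # placeholder Lakehouse table
SILVER_TABLE = "silver_sales"   # placeholder Lakehouse table

bronze_df = spark.read.table(BRONZE_TABLE)

# Defensive check: exit cleanly when the Bronze load produced no rows.
if bronze_df.rdd.isEmpty():
    print(f"No new rows in {BRONZE_TABLE}; skipping Silver refresh.")
else:
    silver_df = (
        bronze_df
        .dropDuplicates(["order_id"])                      # assumed business key
        .filter(F.col("order_amount") >= 0)                # simple data-quality rule
        .withColumn("ingested_at", F.current_timestamp())  # audit/lineage column
    )

    # mergeSchema tolerates additive schema drift from upstream sources.
    (
        silver_df.write
        .format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .saveAsTable(SILVER_TABLE)
    )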

USGov Earthquake Streaming Analytics Pipeline

Project Overview:
Designed and implemented a real-time data streaming and analytics pipeline to capture, process, and visualize live earthquake
data from the U.S. Geological Survey (USGS) feed using Google Cloud Platform (GCP) and Confluent Kafka.

Key Achievements:
• Built an end-to-end streaming architecture: USGS --> Kafka (Confluent Cloud) --> BigQuery --> dbt --> Airflow --> Looker Studio.
• Developed a Python Kafka Producer to continuously ingest GeoJSON data from the USGS feed and publish it to a
Confluent Kafka topic using Avro serialization and Schema Registry for schema governance.
• Engineered a Kafka Consumer to stream data into Google BigQuery in near real time, ensuring low-latency analytics
availability.
• Modeled and transformed data with dbt, creating analytical layers for regional, temporal, and magnitude-based
insights.
• Automated the entire pipeline with Apache Airflow, including ingestion scheduling, transformation runs, and data
validation tasks.
• Delivered a Looker Studio dashboard visualizing live seismic activity (heat maps, magnitude distributions, and
regional trendlines).
• Achieved end-to-end data freshness under 2 minutes, enabling near real-time earthquake monitoring.
Business Value / Real-World Relevance:
• Demonstrates how emergency management and environmental monitoring organizations can build real-time
situational awareness systems.
• Provides a template for streaming architectures used in logistics, IoT, and financial data processing domains.
Tech Stack: Python, Confluent Kafka, Avro, Schema Registry, BigQuery, Airflow, dbt, Looker Studio, GCP

Link [Link]
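
Illustrative sketch (assumptions noted): a minimal Python producer that polls the USGS GeoJSON feed and publishes new events to a Kafka topic, roughly matching the ingestion leg of the architecture above. The real pipeline serialized records with Avro and Schema Registry; this sketch sends plain JSON for brevity, and the topic name and broker address are placeholders.

# Minimal sketch of the ingestion leg: poll the USGS GeoJSON feed and publish
# new events to a Kafka topic (JSON here; the project used Avro + Schema Registry).
import json
import time

import requests
from confluent_kafka import Producer

USGS_FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"
TOPIC = "earthquakes-raw"                                     # placeholder topic
producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def delivery_report(err, msg):
    # Log failed deliveries so the ingestion stays observable.
    if err is not None:
        print(f"Delivery failed for {msg.key()}: {err}")

seen_ids = set()  # naive de-duplication across polling cycles

while True:
    features = requests.get(USGS_FEED, timeout=30).json().get("features", [])
    for event in features:
        event_id = event["id"]
        if event_id in seen_ids:
            continue
        seen_ids.add(event_id)
        producer.produce(TOPIC, key=event_id, value=json.dumps(event),
                         callback=delivery_report)
    producer.flush()
    time.sleep(60)  # the USGS feed refreshes roughly every minute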

NYC Taxi Analytics Pipeline - Python · Snowflake · dbt · Power BI

Project Overview:
Architected and deployed a modern, end-to-end analytics pipeline for NYC Taxi data using Python, Snowflake, and dbt to
deliver executive-grade Power BI insights. The solution automated data ingestion, modeling, testing, and reporting, creating a
robust, scalable foundation for mobility analytics and operational intelligence.

Key Responsibilities:
• Data Ingestion: Built parameterized Python ingestion scripts to fetch monthly Yellow and Green taxi trip data, convert
to Parquet, and load into Snowflake via internal stages and COPY operations.
• Data Modeling: Developed a dbt-based star schema, including dimensions (vendor, payment, rate, trip type,
borough/zone) and incremental fact tables for continuous data updates.
• Data Quality & Governance: Enforced schema integrity with dbt schema tests, custom constraints, and business
rule validation (e.g., duration limits, fare reconciliation, non-negative totals).
• Geospatial Enrichment: Integrated Wikidata SPARQL coordinates to enrich borough and zone-level analyses with
centroid and distance metrics for geospatial dashboards.
• BI & Visualization: Designed Power BI dashboards presenting multi-layered insights from executive summaries to
geospatial and operational performance pages.
Tech Stack: Python, Snowflake, dbt, Power BI, SQL, PyArrow, pandas

Link [Link]
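
Illustrative sketch (assumptions noted): a minimal version of the monthly ingestion step, converting a trip file to Parquet and loading it into Snowflake through an internal stage and COPY INTO, as described above. Credentials, the stage and the landing table are placeholders, and the source file is assumed to already be available locally as CSV.

# Minimal sketch of the monthly ingestion step: convert a trip file to Parquet,
# PUT it to an internal stage and COPY it into a Snowflake landing table.
import pandas as pd
import snowflake.connector

MONTH = "2024-01"                               # example load parameter
CSV_PATH = f"yellow_tripdata_{MONTH}.csv"
PARQUET_PATH = f"yellow_tripdata_{MONTH}.parquet"
STAGE = "@raw_stage"                            # placeholder internal stage
TARGET_TABLE = "raw_yellow_trips"               # placeholder landing table

# Convert the raw file to Parquet (requires pyarrow) before staging it.
pd.read_csv(CSV_PATH).to_parquet(PARQUET_PATH, index=False)

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",   # placeholder credentials
    warehouse="compute_wh", database="nyc_taxi", schema="raw",
)
cur = conn.cursor()

# Upload the Parquet file to the internal stage, then COPY it into the table.
cur.execute(f"PUT file://{PARQUET_PATH} {STAGE} AUTO_COMPRESS = FALSE OVERWRITE = TRUE")
cur.execute(
    f"COPY INTO {TARGET_TABLE} FROM {STAGE}/{PARQUET_PATH} "
    "FILE_FORMAT = (TYPE = PARQUET) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
)
cur.close()
conn.close()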

CERTIFICATIONS

DataCamp
Professional Data Analyst

Udemy
Hadoop: Big Data

Udemy
dbt (Data Build Tool)

EPAM Systems
Modern SAP Development Program

IBM
IBM Business Analysis

KNIME
KNIME Analytics Platform

GitHub
GitHub

Usercentrics
Data Control - Server-Side Tagging

LinkedIn
Apache Spark: Big Data Engineering

SAP
SAP Fieldglass

LANGUAGE SKILLS

Mother tongue(s): ENGLISH


Other language(s):

GERMAN: Listening A1 · Reading A1 · Spoken production A1 · Spoken interaction A1 · Writing A1

Levels: A1 and A2: Basic user; B1 and B2: Independent user; C1 and C2: Proficient user
