HM Note: This hybrid contract role is five (5) days in office. Candidate resumes must include first and last name, email and telephone contact information.
Description
Responsibilities:
Participate in product teams to analyze systems requirements, architect, design, code and implement cloud-based data and analytics products that conform to standards.
Design, create, and maintain cloud-based data lake and lakehouse structures, automated data pipelines, analytics models, and visualizations (dashboards and reports).
Liaise with cluster IT colleagues to implement products, conduct reviews, resolve operational problems, and support business partners in effective use of cloud-based data and analytics products. Analyze complex technical issues, identify alternatives, and recommend solutions.
Prepare and conduct knowledge transfer.
General Skills:
Experience in multiple cloud-based data and analytics platforms and coding/programming/scripting tools to create, maintain, support, and operate cloud-based data and analytics products.
Experience with designing, creating, and maintaining cloud-based data lake and lakehouse structures, automated data pipelines, analytics models, and visualizations (dashboards and reporting) in real-world implementations.
Experience in assessing client information technology needs and objectives
Experience in problem-solving to resolve complex, multi-component failures
Experience in preparing knowledge transfer documentation and conducting knowledge transfer
A team player with a track record for meeting deadlines
Desirable Skills:
Written and oral communication skills to participate in team meetings, write/edit systems documentation, prepare and present written reports on findings/alternate solutions, and develop guidelines/best practices
Interpersonal skills to explain and discuss the advantages and disadvantages of various approaches
Experience in conducting knowledge transfer sessions and building documentation for technical staff related to architecting, designing, and implementing end-to-end data and analytics products
Technology Stack:
Azure Storage, Azure Data Lake, Azure Databricks Lakehouse, and Azure Synapse
Python, SQL, Azure Databricks, and Azure Data Factory
Power BI
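For orientation only (not part of the posting), a minimal PySpark sketch of the kind of lakehouse work this stack implies: reading raw files from Azure Data Lake Storage in Databricks and writing a curated Delta table. The storage path, column, and table names are placeholders.

```python
# Minimal Databricks (PySpark) sketch: land raw parquet files from ADLS Gen2
# into a curated Delta lakehouse table. Paths and names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/2024/"  # placeholder path

raw_df = spark.read.parquet(raw_path)

curated_df = (
    raw_df
    .withColumn("ingest_date", F.current_date())  # audit column
    .dropDuplicates(["order_id"])                 # basic de-duplication; "order_id" is a placeholder
)

# Write a managed Delta table that downstream Synapse / Power BI reports can query.
(
    curated_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.sales_orders")          # placeholder schema.table
)
```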
Skills
Experience and Skill Set Requirements
Experience - 40%
2-5 years of professional experience in data science, data analytics, or a related quantitative field (e.g., data engineering, machine learning, or business intelligence) or equivalent.
Proven experience in data analysis, visualization, and statistical modeling for real-world business or research problems.
Demonstrated ability to clean, transform, and manage large datasets using Python, R, or SQL.
Hands-on experience building and deploying predictive models or machine learning solutions in production or business environments.
Experience with data storytelling and communicating analytical insights to non-technical stakeholders.
Exposure to cloud environments (AWS, Azure, or GCP) and version control tools (e.g., Git).
Experience working in collaborative, cross-functional teams, ideally within Agile or iterative project structures.
Knowledge of feature engineering, missing data handling, and outlier detection (a brief illustrative sketch follows below).
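As a rough illustration of the data-preparation skills listed above (illustrative only; the column names are made up), a minimal pandas sketch might look like this:

```python
# Minimal pandas sketch: missing-data handling, outlier detection, and a simple
# engineered feature. Columns ("amount", "customer_id", "signup_date") are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("transactions.csv", parse_dates=["signup_date"])  # placeholder file

# Missing data: fill numeric gaps with the median, drop rows missing the key field.
df["amount"] = df["amount"].fillna(df["amount"].median())
df = df.dropna(subset=["customer_id"])

# Outlier detection: flag values more than 3 standard deviations from the mean.
z_scores = (df["amount"] - df["amount"].mean()) / df["amount"].std()
df["is_outlier"] = np.abs(z_scores) > 3

# Feature engineering: customer tenure in days relative to the latest record.
df["tenure_days"] = (df["signup_date"].max() - df["signup_date"]).dt.days
```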
Machine Learning & Statistical Modeling
Proficiency in supervised and unsupervised learning techniques (regression, classification, clustering, dimensionality reduction)
Understanding of model evaluation metrics and validation techniques (cross-validation, A/B testing, ROC-AUC, confusion matrix); see the sketch after this list
Basic understanding of deep learning frameworks (TensorFlow, PyTorch, or Keras) is a plus
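To make the evaluation bullet concrete, here is a small, self-contained scikit-learn sketch on synthetic data (illustrative only, not a prescribed approach) showing cross-validated ROC-AUC and a hold-out confusion matrix:

```python
# Minimal scikit-learn sketch: cross-validated ROC-AUC and a hold-out confusion
# matrix for a logistic regression classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1_000)

# 5-fold cross-validated ROC-AUC gives a spread of scores, not just a point estimate.
auc_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC-AUC: {auc_scores.mean():.3f} +/- {auc_scores.std():.3f}")

# Confusion matrix on a held-out split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print(confusion_matrix(y_test, model.predict(X_test)))
```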
Data Visualization & Reporting
Expertise with visualization libraries (matplotlib, seaborn, plotly, or equivalent); see the sketch after this list
Experience building interactive dashboards (Tableau, Power BI, Dash, or Streamlit)
Ability to design clear, impactful data narratives and reports
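A small illustration of the visualization tooling named above (matplotlib here; any of the listed libraries would do), using fabricated data:

```python
# Minimal matplotlib sketch: a labelled, titled monthly revenue trend chart.
# The data is fabricated purely for illustration.
import matplotlib.pyplot as plt
import pandas as pd

data = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
    "revenue": [120, 135, 128, 150, 162, 171],
})

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(data["month"], data["revenue"], marker="o")
ax.set_title("Monthly revenue (illustrative data)")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue ($k)")
fig.tight_layout()
fig.savefig("revenue_trend.png")  # or plt.show() in an interactive session
```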
Data Infrastructure & Tools
Experience with cloud-based data services (e.g., AWS S3, Redshift, Azure Data Lake, GCP BigQuery)
Experience working with big data frameworks such as Apache Spark and Hadoop for large-scale data processing
Familiarity with data pipeline and workflow tools
Experience with API integration and data automation scripts (Selenium, Python, etc.)
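For the API-integration bullet above, a hedged sketch using Python's requests library; the endpoint URL and response fields are placeholders, not a real service:

```python
# Minimal API-integration sketch: pull JSON records from a REST endpoint and
# load them into a DataFrame. The URL and field names are hypothetical.
import pandas as pd
import requests

ENDPOINT = "https://api.example.com/v1/orders"  # placeholder endpoint

response = requests.get(ENDPOINT, params={"page_size": 100}, timeout=30)
response.raise_for_status()                     # fail loudly on HTTP errors

records = response.json()                       # assumes the API returns a JSON list
df = pd.DataFrame.from_records(records)
df.to_parquet("orders_snapshot.parquet")        # hand-off point for a downstream pipeline step
```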
Mathematics & Statistics
Solid grounding in probability, statistics, and linear algebra
Understanding of hypothesis testing, confidence intervals, and sampling methods
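To illustrate the hypothesis-testing and confidence-interval bullet above, a minimal SciPy sketch on synthetic data (the groups and effect size are invented):

```python
# Minimal statistics sketch: two-sample t-test plus a normal-approximation 95%
# confidence interval for the difference in means, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=500)   # e.g., baseline group metric
variant = rng.normal(loc=10.4, scale=2.0, size=500)   # e.g., treatment group metric

t_stat, p_value = stats.ttest_ind(variant, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% CI for the difference in means (normal approximation).
diff = variant.mean() - control.mean()
se = np.sqrt(variant.var(ddof=1) / len(variant) + control.var(ddof=1) / len(control))
print(f"difference = {diff:.2f}, 95% CI = ({diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f})")
```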
Soft Skills - 20%
Strong communication skills; both written and verbal
Ability to develop and present new ideas and conceptualize new approaches and solutions
Excellent interpersonal relations and demonstrated ability to work with others effectively in teams
Demonstrated ability to work with functional and technical teams
Demonstrated ability to participate in a large team and work closely with other individual team members
Proven analytical skills and systematic problem solving
Strong ability to work under pressure, work with aggressive timelines, and be adaptive to change
Displays problem-solving and analytical skills, using them to resolve technical problems
Public Sector Experience - 5%
OPS (or other government) standards and processes
Must Have:
2-5 years of professional experience in data science, data analytics, or a related quantitative field (e.g., data engineering, machine learning, or business intelligence) or equivalent.
Proven experience in data analysis, visualization, and statistical modeling for real-world business or research problems.
Demonstrated ability to apply feature engineering, missing data handling, and outlier detection techniques.
Experience working with big data frameworks such as Apache Spark and Hadoop for large-scale data processing.