Technology Architect 0421 0514

Toronto, ON, Canada

Job Description

HM Note: This hybrid contract role is three (3) days in office. Candidate resumes must include the candidate's first and last name, email address, and telephone number.



Description

Background Information:

The Analytics Data Hub (ADH), built on Azure Synapse, is a platform for streamlined analytics that improves insights discovery and data-driven decision making, improving the delivery of care and health outcomes for Ontarians. The Enterprise Data Warehouse (EDW) is an Oracle-based legacy data warehouse originally developed at Cancer Care Ontario, primarily containing data related to cancer patients in Ontario. This role is critical in shaping the team's current technical strategies and technologies, as well as the future of the ADH and EDW platforms. The role will support many ongoing projects, including future projects on the ADH/EDW roadmap. These projects include, but are not limited to:

• Establish PowerBI Integration with ADH (i.e., implement a pipeline between PowerBI and ADH)
• Establish PHI Data Aggregation Guidelines (i.e., develop process and governance to support OH analytics teams)
• Establish a process for secure PHI disclosure to external stakeholders (i.e., determine long-term technical solution(s) to provide PHI-level data to external stakeholders)
• Metadata Tool in Production (i.e., move the previous proof of concept into production)
• ADH Presentation Zone Onboarding (i.e., plan to onboard existing ADH tenants to the presentation zone)
• Future ADH/EDW Data Lake (i.e., plan for the next data lake, including the strategy for moving ADH/EDW to that platform)

In addition, this role may support future strategies for PowerBI, MicroStrategy, and SAS.

Must haves:

• Hands-on experience with data lake platforms (e.g., Microsoft Azure Synapse and Fabric, Databricks, Amazon S3/Lake Formation, Google BigLake) and ETL/ELT tools (e.g., Informatica, Azure Data Factory), with a strong understanding of data pipeline orchestration and cloud-native architectures
• Understanding of metadata management, lineage tracking, and data classification frameworks
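As a purely illustrative sketch of the pipeline-orchestration and lineage-tracking concepts named above (every function and field name below is hypothetical and not from this posting; a real implementation would sit in a tool such as Azure Data Factory or Synapse pipelines), a minimal ELT step that records lineage as it runs might look like:

```python
from dataclasses import dataclass
import hashlib
import json


@dataclass
class LineageRecord:
    """One lineage entry: which step ran, what fed it, what it produced."""
    step: str
    inputs: list
    output_fingerprint: str


def fingerprint(rows):
    # Stable hash of the data so downstream steps can verify provenance.
    return hashlib.sha256(json.dumps(rows, sort_keys=True).encode()).hexdigest()[:12]


def run_pipeline(raw_rows):
    """Toy ELT: land raw rows, then aggregate, recording lineage at each step."""
    lineage = []
    # "Load": land the raw extract unchanged and record where it came from.
    landed = list(raw_rows)
    lineage.append(LineageRecord("load_raw", ["source_extract"], fingerprint(landed)))
    # "Transform": aggregate row counts per region (a stand-in for a real
    # curated-zone transform).
    counts = {}
    for row in landed:
        counts[row["region"]] = counts.get(row["region"], 0) + 1
    curated = [{"region": r, "n": n} for r, n in sorted(counts.items())]
    lineage.append(LineageRecord("aggregate_by_region", ["load_raw"], fingerprint(curated)))
    return curated, lineage
```

The point of the sketch is the shape, not the tooling: each step emits a lineage record tying its output fingerprint back to its inputs, which is the behaviour a metadata/lineage framework automates at platform scale.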

Responsibilities:

• Maintain a scalable, secure, and highly available data lake architecture to support enterprise analytics, including supporting the evaluation, design, and implementation of OH's next data lake
• Ensure cloud infrastructure (e.g., AWS, Azure, GCP) and storage configurations are optimized for performance, cost-efficiency, and compliance
• Manage the selection, deployment, and lifecycle of data engineering tools and platforms (e.g., ETL/ELT pipelines, orchestration tools, data catalogs, monitoring tools)
• Drive automation and operational excellence through CI/CD for data pipelines
• Oversee the integration of internal and third-party data sources into the data lake
• Partner with data governance and security teams to enforce data privacy, access controls, and regulatory compliance
• Work with ADH/EDW teams and cross-functional teams to implement deliverables from the internal ADH/EDW roadmap
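To illustrate the kind of automated check that CI/CD for data pipelines typically includes (the function, column names, and threshold below are hypothetical examples, not requirements from this posting), a data-quality gate run before promoting a pipeline output might look like:

```python
def quality_gate(rows, required_columns, max_null_rate=0.05):
    """Fail-fast check a CI stage could run before promoting pipeline output.

    Raises ValueError if the output is empty or any required column's
    null rate exceeds the allowed threshold.
    """
    if not rows:
        raise ValueError("empty output: refusing to promote")
    for col in required_columns:
        nulls = sum(1 for r in rows if r.get(col) is None)
        if nulls / len(rows) > max_null_rate:
            raise ValueError(f"column {col!r} null rate too high")
    return True
```

In a CI/CD setup, a gate like this would run as a pipeline stage after each build, blocking promotion of a dataset (for example, into a presentation zone) when the check raises.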

Desired Skills:

• Awareness of emerging technologies, trends, and directions
• Experience translating business requirements into reporting needs
• Experience preparing conceptual, logical, and/or physical processes and data models
• Excellent analytical, problem-solving, and decision-making skills; strong verbal and written communication skills; interpersonal and negotiation skills
• A team player with a track record of meeting deadlines

Required Skills:

• Leadership experience in the development and implementation of technical architectures
• Experience with structured methodologies for the design, development, and implementation of applications
• Extensive experience in systems analysis and design in large systems environments
• Knowledge of and experience designing processes around ITIL, with the ability to guide others using this methodology
• Experience developing, recommending, implementing, and managing technical architectures
• Experience developing enterprise architecture deliverables (e.g., models)

Required Experience / Evaluation Criteria:


Total Evaluation Criteria: 100 Points

Hands-on experience with data lake platforms (e.g., Microsoft Azure Synapse and Fabric, Databricks, Amazon S3/Lake Formation, Google BigLake) and ETL/ELT tools (e.g., Informatica, Azure Data Factory), with a strong understanding of data pipeline orchestration and cloud-native architectures:

40 Points

Experience with metadata management, lineage tracking, and data classification frameworks:

20 Points

Leadership experience in the development and implementation of technical architectures:

20 Points

Strong written and oral communication skills. Must be able to communicate development strategy to technical and non-technical group members and clearly communicate any issues found during development:

20 Points


Deliverables:


The following is not an exhaustive list of deliverables, as some may be added during the contract. Deliverables include:

• Establish PowerBI Integration with ADH
• Establish PHI Data Aggregation Guidelines
• Determine long-term technical solution(s) to provide PHI-level data to external stakeholders
• Implement Metadata Tool in Production for ADH
• Determine Future ADH/EDW Data Lake (i.e., plan for the next data lake, including the strategy for moving ADH/EDW to that platform)
• Review and recommend other analytics tools (e.g., SAS, Anaconda)

Knowledge Transfer Details:

The resource will ensure full knowledge transfer to the Ontario Health team before the end of the engagement. Some of this will occur at the end of the engagement, but information will also be shared as it is obtained and consolidated. Key deliverables will be shared with the team. The resource must provide all related documentation as part of the knowledge transfer protocol. Documents will be reviewed by the appropriate leads and signed off by the manager/director. The resource will work collaboratively with the Ontario Health team throughout the assignment and ensure key deliverables, milestones, and documentation are shared. A walkthrough of any demos, development, etc. will be required before the end of the engagement.

Must Haves:

• 7+ years of hands-on experience with data lake platforms (e.g., Microsoft Azure Synapse and Fabric, Databricks, Amazon S3/Lake Formation, Google BigLake) and ETL/ELT tools (e.g., Informatica, Azure Data Factory), with a strong understanding of data pipeline orchestration and cloud-native architectures
• 7+ years of experience with metadata management, lineage tracking, and data classification frameworks




Job Detail

  • Job Id
    JD3049291
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Contract
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Toronto, ON, Canada
  • Education
    Not mentioned