Senior Data Engineer

Posted 19 April by hackajob


hackajob is a matching platform that partners with multiple companies, helping them to hire the best talent and build the future. For the chance to be matched to this role and other similar roles, set up your free hackajob profile.

This company empowers public sector organisations to deliver award-winning services for citizens and they need passionate people to help them. They want to positively impact the future of the country by using technology to improve society, for everyone. They are already working with brilliant public servants to modernise technology and accelerate digital delivery. But they know they can do more to help those who share their vision. Use your skills to transform our society - join them in delivering technology that positively impacts the future of the UK.

Your role

  • Define, shape and perfect data strategies in central and local government
  • Help public sector teams understand the value of their data, and make the most of it
  • Establish yourself as a trusted advisor in data-driven approaches using public cloud services such as AWS, Azure and GCP
  • As employee growth is a huge focus here, they would expect you to contribute to their recruitment efforts and take on line management responsibilities

What skills and experience are they looking for?

They are looking for candidates with a range of skills and experience. Please apply even if you don’t meet all the criteria; if unsuccessful, they can provide you with feedback.

  • Enthusiasm for learning and self-development
  • Proficiency in Git (including GitHub Actions) and the ability to explain the benefits of different branching strategies
  • Gathering and meeting the requirements of both clients and users on a data project
  • Strong experience with Infrastructure as Code (IaC) and the ability to guide how to deploy infrastructure into different environments
  • Owning the cloud infrastructure underpinning data systems through a DevOps approach
  • Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop
  • Good understanding of the possible architectures involved in modern data system design (e.g. Data Warehouse, Data Lakes and Data Meshes) and the different use cases for them
  • Ability to create data pipelines in a cloud environment and integrate error handling within them, with an understanding of how to create reusable libraries to encourage a uniform approach across multiple data pipelines
  • Able to document and present an end-to-end diagram explaining a data processing system in a cloud environment, with some knowledge of diagramming conventions (C4, UML, etc.)
  • Able to provide guidance on implementing a robust DevOps approach in a data project, and to discuss the tools needed for DataOps in areas such as orchestration, data integration and data analytics
  • Experience in improving a project’s resilience by checking for software vulnerabilities and implementing appropriate testing strategies (unit, integration, data quality, etc.)
  • Knowledge of SOLID, DRY and TDD principles and how to apply them practically in a project
  • Agile practices such as Scrum, XP, and/or Kanban
  • Designing and implementing efficient data transformation processes at scale, both in batch and streaming use cases
  • People skills such as mentoring, being a supportive team player and performing line management duties
  • Ability to demonstrate a commercial mindset on projects, growing accounts organically with senior stakeholders

Desirable experience

Experience in the following things isn’t essential, but it’s highly desirable!

  • Working at a technology consultancy
  • Working with Docker and virtual environments as part of the development and CI/CD process
  • Working with senior stakeholders to gather requirements and keep them engaged
  • Working with a team of engineers using techniques such as pair programming or mob programming
  • Working with data scientists to productionise advanced data deliverables, such as machine learning models
  • Working knowledge of statistics
  • Working with multidisciplinary digital and technology teams
  • Working within the public sector

Benefits

They are always listening to their growing teams and evolving the benefits available to their people. As they scale, so do their benefits, and they are scaling quickly. They've recently introduced a flexible benefits platform which includes a Smart Tech scheme, a Cycle to Work scheme, and an individual benefits allowance which you can invest in a healthcare cash plan or pension plan. They're also big on connection and have an optional social and wellbeing calendar of events for all employees to join should they choose to.

Here are some of their most popular benefits listed below:

  • 30 Days Holiday - they offer 30 days of paid annual leave
  • Flexible Working Hours - they are flexible with what hours you work
  • Flexible Parental Leave - they offer flexible parental leave options
  • Remote Working - they offer part-time remote working for all their staff
  • Paid Counselling - they offer paid counselling as well as financial and legal advice

Required skills

  • Data Analysis
  • Python
  • Spark

Reference: 52502187

