Data Engineering

The data revolution has resulted in a massive growth in data-driven applications and decision-making processes. Data Engineers are the highly skilled professionals who design and maintain the critical data pipelines that make all this possible.

Our customisable data engineering courses are designed to equip participants with vital hands-on experience of the key data technologies that are central to your organisation.

The course covers a wide range of technologies and culminates in a Capstone project that simulates a real-world Data Engineering challenge. Participants finish ready to hit the ground running as Data Engineering professionals.


Course Details

Our Data Engineering modules can be tailored to your organisational needs.

Popular Modules Include:

  • SQL and Relational Modelling. From fundamentals to advanced, we offer modules on all popular relational database systems, including Oracle, PostgreSQL, MySQL, and Sybase.
  • NoSQL Storage and Data Modelling. Choose from modules covering all popular NoSQL storage systems, including MongoDB, Cassandra, and HBase.
  • Data Processing and ETL (Extract, Transform, Load) with Python.
  • The Hadoop Ecosystem for Big Data
  • Big Data Processing with Apache Spark
  • Streaming Data Pipelines and Event-Driven Architectures with Apache Kafka

Courses are hands-on with graded assignments and peer feedback. Assignments and projects are designed to mimic real-world tasks and organisational workflows.

All courses culminate in a Capstone project which gives participants an opportunity to put what they have learned into practice and to demonstrate their new skills in real-world scenarios.

What will I learn?

  • Learn to make full use of the relational database tools in your organisation.
  • Understand the critical differences when storing data across different SQL and NoSQL storage engines.
  • Gain a practical understanding of creating and automating data pipelines with Python.
  • Learn how the various elements of the Hadoop ecosystem inter-operate to provide highly scalable Big Data systems.
  • Develop vital skills with Apache Spark to create batch and streaming data processing applications that scale according to demand.
  • Learn to build high-volume data pipelines with Apache Kafka and Kafka Streams.

Duration

Tailored to your needs.

Delivery Method

On-site or Virtual.

Why neueda?

Neueda is committed to providing customised training to ensure that your global workforce is future-proofed. We continually provide industry-leading learning solutions to fill the ever-widening skills gaps in today's market.