Data Engineer Job at ParallelScore

Data Engineer Job at ParallelScore: ParallelScore is hiring a Data Engineer in the Engineering / Technical category. See the application deadline, work level, job specialization, job location, job type, and method of application below.

Job Summary

  • Company: ParallelScore
  • Deadline Date: 17th June, 2022
  • Specialization: Engineering / Technical
  • Work Level: Experienced (Non-Manager)
  • Job Type: Full-Time
  • Experience:
  • Location(s): Lagos

Job/Company Description:

ParallelScore is a product development firm that builds data-driven, user-centric solutions by leveraging design, engineering, and innovative thinking. We are a provocative product development agency focused on imagining and building highly interactive, user-driven experiences that push the limits of user design and development.

We are recruiting to fill the position below:

Job Position: Data Engineer

Job Location: Lagos
Job Type: Full-time

Job Description

  • ParallelScore is looking to fill the role of Data Engineer.
  • To join our growing team, please review the list of responsibilities and requirements below.

Job Responsibilities

  • Clean, analyze, integrate, and process complex data
  • Apply analytical, machine learning, and statistical modeling techniques to identify patterns and signals (desirable)
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/Azure ‘big data’ technologies
  • Communicate effectively at all levels, sharing information and transferring knowledge and expertise to colleagues
  • Stay open-minded and able to see the bigger picture
  • Collect and mine data to provide a better user experience
  • Mine massive, high-dimensional, graph-based datasets
  • Deploy data processing and analysis pipelines
  • Self-learn new technologies within a four-week timeframe
  • Design, build, and maintain all parts of the data warehouse infrastructure to support adaptive streaming analysis (requirements gathering, ETL, data modeling, metric design, reporting/dashboarding, etc.)
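To illustrate the extract-transform-load responsibility above, here is a minimal Python sketch of an ETL pipeline. It uses the standard-library `sqlite3` module as a stand-in for a real warehouse (e.g., on AWS or Azure); the table name, schema, and sample records are illustrative assumptions, not ParallelScore's actual data.

```python
import sqlite3

# Minimal ETL sketch. SQLite stands in for a cloud data warehouse;
# the "events" schema and sample rows are hypothetical.

def extract():
    # Extract: pull raw records from a source system (hard-coded here).
    return [
        {"user_id": 1, "event": "click", "duration_ms": "120"},
        {"user_id": 2, "event": "view", "duration_ms": "abc"},  # dirty row
        {"user_id": 1, "event": "view", "duration_ms": "300"},
    ]

def transform(rows):
    # Transform: coerce types and drop rows that fail validation.
    clean = []
    for row in rows:
        try:
            clean.append((row["user_id"], row["event"], int(row["duration_ms"])))
        except ValueError:
            continue  # discard rows with unparseable durations
    return clean

def load(rows, conn):
    # Load: write the cleaned rows into the warehouse table via SQL.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events "
        "(user_id INTEGER, event TEXT, duration_ms INTEGER)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(transform(extract()), conn)

conn = sqlite3.connect(":memory:")
run_pipeline(conn)
total = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(total)  # the dirty row is dropped, leaving 2 clean rows
```

In production the same extract/transform/load separation would be orchestrated by a workflow tool and target Spark or a managed warehouse rather than SQLite.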

Job Requirements

  • Experience with big data tools: Databricks, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases.
  • Experience with data pipeline and workflow management tools
  • Experience with AWS or Azure cloud services related to big data and data migration
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, JavaScript, etc.

Skills:

  • Statistical concepts
  • Administrative Coding
  • Agile development methodology
  • AWS and Azure platform
  • Data modeling best practices and the ability to apply an analytical & structured approach
  • Data pre-processing
  • Databricks, Delta Lake, Hadoop, Spark, Kafka
  • Feature engineering.

