Infosys New Job: Apply for the Data Engineer Role

Experience: 5-9 Years

Location: Multiple Locations Across India (Bangalore, Pune, Hyderabad, Mysore, Kolkata, Chennai, Chandigarh, Trivandrum, Indore, Nagpur, Mangalore, Noida, Bhubaneswar, Coimbatore, Mumbai, Jaipur, Hubli, Vizag)

Job Type: Full-Time

Service Line: Enterprise Package Application Services

Job Description:

As a Data Engineer at Infosys, you will play a key role in supporting the consulting team throughout various project phases, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will conduct research, build Proofs of Concept (POCs), and help deliver high-quality, value-adding data solutions for clients.

Key Responsibilities:

  • Assist in different phases of the project, from problem definition to solution deployment.
  • Research and explore alternative solutions through literature reviews, vendor evaluations, and POCs.
  • Develop requirement specifications from business needs and define detailed functional designs.
  • Configure solution requirements, diagnose root causes of issues, seek clarifications, and identify solution alternatives.
  • Design, develop, and maintain scalable data pipelines on Databricks using PySpark (see the sketch after this list).
  • Optimize and troubleshoot existing data pipelines for performance and reliability.
  • Ensure data quality, integrity, and compliance across various sources.
  • Collaborate with data analysts and scientists to understand data requirements and deliver solutions.
  • Monitor data pipeline performance and conduct necessary maintenance and updates.
  • Document data pipeline processes and technical specifications.
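
For orientation, the following is a minimal sketch of the kind of PySpark pipeline the role describes: ingest raw files, apply basic cleansing and a data-quality filter, and write a Delta table for downstream analysts. It is illustrative only; the paths, column names, and table name are hypothetical placeholders and not part of the posting.

    # Minimal PySpark pipeline sketch (illustrative; paths, columns, and
    # table names are hypothetical placeholders).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

    # Ingest: read raw CSV files from a landing zone.
    raw = (spark.read
           .option("header", True)
           .option("inferSchema", True)
           .csv("/mnt/landing/orders/"))

    # Transform: deduplicate, normalize types, and drop invalid rows.
    cleaned = (raw
               .dropDuplicates(["order_id"])
               .withColumn("order_date", F.to_date("order_date"))
               .filter(F.col("amount") > 0))

    # Load: publish the result as a Delta table for analysts and scientists.
    (cleaned.write
     .format("delta")
     .mode("overwrite")
     .saveAsTable("analytics.orders_clean"))

On Databricks, a SparkSession is already available in notebooks and jobs, so the same transform-and-write pattern can be scheduled as a job and monitored as part of routine pipeline maintenance.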

Technical & Professional Requirements:

  • 5+ years of experience in data engineering.
  • Strong proficiency in Databricks and Apache Spark.
  • Advanced SQL skills and experience with relational databases.
  • Experience with big data technologies (e.g., Hadoop, Kafka).
  • Knowledge of data warehousing concepts and ETL processes.
  • Familiarity with cloud platforms (e.g., AWS, Azure).
  • Experience with version control systems (e.g., Bitbucket).
  • Understanding of DevOps principles and hands-on experience with CI/CD tools (e.g., Jenkins).
  • Databricks certification is a plus.

Preferred Skills:

  • Cloud Platforms: AWS Database
  • Big Data: Hadoop
  • Azure Analytics Services: Azure Databricks

Educational Requirements:

  • Bachelor’s Degree in Engineering or Technology (B.E./B.Tech.)
