Target Off Campus Drive 2025 hiring Data Engineer Job, Bangalore

Apply for Target Off Campus Drive 2025! Hiring Data Engineer Job in Bangalore (1+ years exp). Work on Spark, Scala, Python, BigQuery, ETL pipelines, and cloud data engineering for a top Fortune 500 company. Apply now via Jobformore!

Candidates interested in the Target Off Campus Drive job openings can go through the details below for more information.

Key Job Details for the Data Engineer Role

Company: Target

Qualifications: BE/BTech

Experience Needed: 1+ years

Job Req ID: R0000392166

Location: Bangalore

Position Overview:

  • Assess client needs and convert business requirements into a business intelligence (BI) solutions roadmap for complex issues involving long-term or multiple work streams.
  • Analyze technical issues and questions, identifying data needs and delivery mechanisms.
  • Implement data structures using best practices in data modeling, ETL/ELT processes, Spark, Scala, SQL, database, and OLAP technologies.
  • Manage the overall development cycle, driving best practices and ensuring development of high-quality code for common assets and framework components.
  • Provide technical guidance to a team of high-caliber data engineers, developing test-driven solutions and BI applications that can be deployed quickly and in an automated fashion.
  • Manage and execute against agile plans, setting deadlines based on client, business, and technical requirements.
  • Drive resolution of technology roadblocks, including code, infrastructure, build, deployment, and operations.
  • Ensure all code adheres to development and security standards.
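The ETL/ELT responsibilities above can be sketched in miniature. This is a minimal plain-Python illustration of an extract-transform-load step, not Spark or Target's actual tooling; the function names and sample records are assumptions for demonstration.

```python
# Minimal extract-transform-load sketch in plain Python (illustrative only;
# a production pipeline would use Spark/Scala or an orchestration framework).

def extract(rows):
    """Extract: yield raw records from a source (here, an in-memory list)."""
    yield from rows

def transform(records):
    """Transform: normalize field names and drop records missing a price."""
    for rec in records:
        if rec.get("price") is None:
            continue  # basic data-quality filter
        yield {"sku": rec["sku"].upper(), "price": round(float(rec["price"]), 2)}

def load(records):
    """Load: collect into a target store (here, a list standing in for a table)."""
    return list(records)

raw = [{"sku": "a1", "price": "9.991"}, {"sku": "b2", "price": None}]
loaded = load(transform(extract(raw)))
```

The same extract/transform/load separation carries over directly to Spark jobs, where each stage becomes a DataFrame operation.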

About you:

  • 4-year degree or equivalent experience
  • 1+ years of software development experience, preferably in data engineering/Hadoop development (Hive, Spark, etc.)
  • Hands-on experience in object-oriented or functional programming languages such as Scala, Java, or Python
  • Knowledge of or experience with a variety of database technologies (Postgres, Cassandra, SQL Server)
  • Knowledge of designing data integration using API and streaming technologies (Kafka), as well as ETL and other data integration patterns
  • Experience with cloud platforms such as Google Cloud, AWS, or Azure; hands-on experience with BigQuery is an added advantage
  • Good understanding of distributed storage (HDFS, Google Cloud Storage, Amazon S3) and processing (Spark, Google Dataproc, Amazon EMR, or Databricks)
  • Experience with a CI/CD toolchain (Drone, Jenkins, Vela, Kubernetes) is a plus
  • Familiarity with data warehousing concepts and technologies
  • Maintains technical knowledge within areas of expertise
  • Constant learner and team player who enjoys solving tech challenges with a global team
  • Hands-on experience building complex data pipelines and flow optimizations
  • Able to understand the data, draw insights, make recommendations, and identify any data quality issues upfront
  • Experience with test-driven development and software test automation
  • Follows best coding practices and engineering guidelines as prescribed
  • Strong written and verbal communication skills, with the ability to present complex technical information clearly and concisely to a variety of audiences
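The test-driven development requirement above is worth practicing before the interview. A minimal sketch of the style: write assertions for a small pipeline helper first, then make them pass. The `dedupe_by_key` function here is hypothetical, not part of Target's codebase.

```python
# Test-first sketch: the assertions below describe the desired behavior of a
# small, hypothetical pipeline helper before/alongside its implementation.

def dedupe_by_key(records, key):
    """Keep the first record seen for each value of `key`, preserving order."""
    seen = set()
    out = []
    for rec in records:
        if rec[key] not in seen:
            seen.add(rec[key])
            out.append(rec)
    return out

# Tests written up front, as in test-driven development:
assert dedupe_by_key([], "id") == []
assert dedupe_by_key([{"id": 1}, {"id": 1}, {"id": 2}], "id") == [{"id": 1}, {"id": 2}]
```

In a real project these assertions would live in a test suite (e.g. pytest) and run in CI on every commit.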

Apply Now for Target Data Engineer Jobs

How to Apply for the Target Off Campus Drive 2025

  • Click on the official "Apply" link above; it will take you to Target's official careers site.
  • First, check the experience needed, job description, and required skills carefully.
  • Stay connected and subscribe to Jobformore.com to get the latest job updates from Jobformore for freshers and experienced candidates.

Interview Questions

Technical:

  • Explain your experience with Spark and Scala in building data pipelines.
  • How would you design an ETL pipeline for high-volume streaming data?
  • What are the advantages of using BigQuery for analytics workloads?
  • Describe your experience with CI/CD in data engineering.
  • How do you ensure data quality in a pipeline you develop?
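For the last technical question, it helps to have a concrete answer ready. One minimal sketch of upfront data-quality checks in plain Python; the rules (null counts on required fields, duplicate-key detection) and the sample records are illustrative assumptions, not a prescribed answer.

```python
# Sketch of upfront data-quality checks a pipeline might run before loading.
# The specific rules and field names here are illustrative assumptions.

def quality_report(records, required=("sku", "price")):
    """Count nulls per required field and duplicate primary keys."""
    nulls = {f: 0 for f in required}
    seen, dupes = set(), 0
    for rec in records:
        for f in required:
            if rec.get(f) is None:
                nulls[f] += 1
        key = rec.get("sku")
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"nulls": nulls, "duplicate_keys": dupes, "rows": len(records)}

report = quality_report([
    {"sku": "A1", "price": 10.0},
    {"sku": "A1", "price": 12.0},
    {"sku": "B2", "price": None},
])
```

A strong interview answer would add where such checks run (pre-load validation, alerting thresholds) and how failures are surfaced.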

Conceptual:

  • What is the difference between ETL and ELT?
  • How does Kafka help in building scalable data pipelines?
  • Explain partitioning and bucketing in Hive.
  • What are best practices in test-driven development for data pipelines?
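The ETL-vs-ELT question above can be illustrated in code. In ELT, raw data is loaded into the warehouse first and transformed there with SQL (the pattern BigQuery-style pipelines commonly follow). This sketch uses Python's built-in sqlite3 as a stand-in warehouse; the table and column names are made up for the example.

```python
# ELT in miniature: load raw data as-is, then transform inside the
# "warehouse" with SQL (sqlite3 stands in for BigQuery here).
import sqlite3

raw = [("a1", "9.99"), ("b2", "19.50")]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_sales (sku TEXT, price TEXT)")
con.executemany("INSERT INTO raw_sales VALUES (?, ?)", raw)  # Load step: no transformation

# Transform step happens inside the warehouse, after loading:
con.execute("""
    CREATE TABLE sales AS
    SELECT upper(sku) AS sku, CAST(price AS REAL) AS price FROM raw_sales
""")
rows = con.execute("SELECT sku, price FROM sales ORDER BY sku").fetchall()
```

By contrast, ETL would perform the uppercasing and type cast *before* loading, so only cleaned data ever reaches the warehouse.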

Behavioral:

  • Describe a time you optimized a data pipeline for performance.
  • How do you prioritize when multiple stakeholders request new data pipelines?
  • Share a situation where you identified a critical data quality issue.
