Apply for the JLL Off Campus Drive 2025! JLL is hiring Data Engineers in Bangalore (BE/BTech) to help transform commercial real estate with the power of data. You will build scalable data pipelines, manage structured and unstructured datasets, and work with cutting-edge technologies like Python, Kafka, Spark Streaming, and Azure, contributing to JLL’s enterprise data strategy in a global environment.
Candidates interested in the JLL Off Campus Drive job openings can go through the details below for more information.
Key Job Details of the Data Engineer Role
Company: JLL
Qualifications: BE/BTech
Experience Needed: 0-1 years
Job Req ID: REQ439600
Location: Bangalore

Start Date: 28th July 2025
Click here to Join on WhatsApp:- https://bit.ly/39gGfwZ
Click here to Join on Telegram:- https://telegram.me/qaidea
Job Description
The JLL Technologies Enterprise Data team is a newly established central organization that oversees JLL’s data strategy. The Data Engineer will work with colleagues across JLL globally, providing solutions, developing new products, and building enterprise reporting and analytics capabilities to reshape the business of Commercial Real Estate using the power of data, and we are just getting started on that journey!
The Data Engineer is a self-starter who can thrive in a diverse, fast-paced environment as part of our Enterprise Data team. This is an individual-contributor role responsible for developing data solutions that are strategic for the business and built on the latest technologies and patterns. It is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels, drawing on in-depth knowledge of data, infrastructure, technologies, and data engineering experience.
As a Data Engineer 1 at JLL Technologies, you will:
Contribute to the development of information infrastructure and data management processes that move the organization toward a more sophisticated, agile, and robust target-state data architecture
Develop systems that ingest, cleanse, and normalize diverse datasets; build data pipelines from various internal and external sources; and add structure to previously unstructured data
Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
Develop a good understanding of how data flows through and is stored across the organization, spanning applications such as CRM, broker and sales tools, Finance, HR, etc.
Develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing our data processing capabilities
What we are looking for:
0+ years’ overall work experience and a Bachelor’s degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.
Knowledge of Python, Kafka, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
Hands-on experience building data pipelines in the cloud is a plus.
Experience handling unstructured data, working in a data lake environment, leveraging data streaming, and developing event- or queue-driven data pipelines is an advantage.
A reliable, self-motivated, and self-disciplined team player capable of executing projects in a fast-paced environment while working with cross-functional teams.
How to Apply for JLL Off Campus Drive 2025
Click on the official JLL apply link above; it will take you to the company's official site.
First, carefully check the experience needed, the job description, and the required skills.
Stay connected and subscribe to Jobformore.com to get the latest job updates from Jobformore for freshers and experienced candidates.
Interview Questions
Technical
- What are data pipelines, and why are they important?
- Explain the use of Apache Kafka in real-time data processing.
- Difference between Azure SQL and Cosmos DB with use cases.
- Write a Python function to clean and normalize a dataset.
- Explain the architecture of Spark Streaming.
- How do you handle unstructured data in a data lake?
- Describe ETL vs ELT with examples.
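For the "clean and normalize a dataset" question above, one possible answer is a plain-Python sketch like the following. It drops records with missing values, trims stray whitespace from strings, and min-max scales one numeric field to [0, 1]; the field names (`city`, `sqft`) are purely illustrative:

```python
def clean_and_normalize(records, numeric_field):
    """Drop records with missing values, strip whitespace from strings,
    and min-max normalize one numeric field to the range [0, 1]."""
    cleaned = []
    for rec in records:
        # Skip records with any None or blank-string value.
        if any(v is None or (isinstance(v, str) and not v.strip())
               for v in rec.values()):
            continue
        # Trim whitespace from all string fields.
        cleaned.append({k: v.strip() if isinstance(v, str) else v
                        for k, v in rec.items()})

    # Min-max normalize the chosen numeric field.
    values = [rec[numeric_field] for rec in cleaned]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero when all values are equal
    for rec in cleaned:
        rec[numeric_field] = (rec[numeric_field] - lo) / span
    return cleaned

rows = [
    {"city": " Bangalore ", "sqft": 1200},
    {"city": "Mumbai", "sqft": 800},
    {"city": None, "sqft": 950},   # dropped: missing city
    {"city": "Delhi", "sqft": 1600},
]
print(clean_and_normalize(rows, "sqft"))
# → [{'city': 'Bangalore', 'sqft': 0.5}, {'city': 'Mumbai', 'sqft': 0.0},
#    {'city': 'Delhi', 'sqft': 1.0}]
```

In an interview, it also helps to mention that at scale this same cleansing/normalization logic would typically be expressed with a library such as pandas or Spark rather than hand-written loops.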
Behavioral
- Why do you want to work as a Data Engineer at JLL?
- Describe a project where you worked on data ingestion or cleaning.
- How do you manage your learning when dealing with new technologies?