42 Data Engineer jobs in Vietnam
Big Data Engineer (Spark/ Hadoop/ Scala)
Posted 22 days ago
Job Description
- Work on the data pipeline infrastructure that is the backbone of our business.
- Write elegant functional Scala code to crunch TBs of data on Hadoop clusters, mostly using Spark.
- Own data pipeline deployments to clusters, on-premises or in the cloud (AWS, GCP, or others).
- Manage Hadoop clusters end to end, from security to reliability to high availability (HA).
- Build a pluggable, unified data lake from scratch.
- Automate and scale tasks for the Data Science team.
- Constantly look for ways to improve frameworks and pipelines, so learning on the job is a given.
- Our expertise and requirements include, but are not limited to: Spark, Scala, HDFS, YARN, Hive, Kafka, distributed systems, Python, datastores (relational and NoSQL), and Airflow.
Data Engineer
Posted today
Job Description
(Salary: Negotiable)
- Work closely with engineering teams to help build data solutions for BAEMIN;
- Design, evaluate, and implement frameworks, platforms, and architectures that can adequately handle the needs of a rapidly growing data-driven company;
- Work with Data Scientists to design and build scalable/low latency AI-powered systems;
- Design and build both batch and real-time data pipelines for various data sources: API, flat files, databases, etc.;
- Build and maintain the data warehouse for the company;
- Improve and optimize workloads and processes to ensure that performance levels can support continuous, accurate, reliable, and timely delivery of data products;
- Prepare data for Data Science projects;
- Monitor and optimize data infrastructure costs to keep them at a reasonable level;
- Remain up to date with industry standards and technological advancements to ensure the data infrastructure is both scalable and reliable;
- Work with other teams to continuously improve the company's data infrastructure and data warehouse;
- Coach junior team members;
- Provide suitable training for other teams (if any);
**Position**: Staff/Specialist
**Employment type**: Full-time
**Benefits**:
1/ Attractive salary & benefits
- 12 days of annual leave + 4 days of fully paid sick leave;
- Year-end bonus, performance bonus, public holiday bonus, birthday bonus;
- Appraisal and salary review every year;
2/ MacBook provided
3/ Opportunity to be trained and to work at a leading food tech company in Vietnam;
**Minimum education**: Bachelor's degree
**Job requirements**:
- Around 3+ years of experience, preferably in big data infrastructure;
- Proficient with common big data toolsets in a large-scale environment;
- Experienced in deploying ML models to production;
- Well-versed in setting up continuous integration and deployment for big data or other projects;
- Familiar with software development process and culture; strong programming skills;
- Familiar with 3rd-party analytics solutions (e.g. Amplitude, Looker, Segment, Google Tag Manager);
- Broad knowledge of various topics in the data domain (e.g. platforms, analysis, ML);
- Knowledge and experience in both relational databases and NoSQL databases (e.g. MongoDB);
- Knowledge and experience in cloud-based infrastructure (e.g. AWS, GCP);
- Good communication skills both in Vietnamese & English;
- An open mindset toward learning and problem-solving;
**Gender requirement**: Male/Female
**Industry**: Cloud (AWS/Azure), IT - Software, SQL
Data Engineer
Posted today
Job Description
**Responsibilities**
- Design and develop data pipelines and ETL processes to collect, process, and store large-scale data from various sources, including user interactions, web logs, and sensor data.
- Develop and maintain data architecture and infrastructure, ensuring scalability, reliability, and security of data storage and processing.
- Build and deploy machine learning models and algorithms for personalized education and user experience, using technologies such as TensorFlow, PyTorch, or Scikit-learn.
**Job Requirements**
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Minimum of 2 years of experience in data engineering, machine learning, or software development, preferably in an edtech or related industry.
- Strong programming skills in Python, Java, or Scala, and experience with SQL and NoSQL databases.
- Experience with ETL tools and frameworks, such as Apache Kafka, Apache Spark, or AWS Glue.
- Experience with machine learning frameworks and platforms, such as TensorFlow, PyTorch, or Scikit-learn.
- Experience with cloud platforms such as AWS or GCP.
- Strong problem-solving skills, analytical skills, and attention to detail.
- Excellent communication skills and the ability to work in a team-oriented environment.
- **Working time: Monday - Friday (09:00 - 17:00)**
- **Working location: District 5, Ho Chi Minh City**
**Why you should apply for this position**
- All kinds of insurance according to the provisions of the Labor Law (social insurance, health insurance, public holidays, maternity leave, etc.)
- 12 days of annual leave
- Accommodation allowance of up to 10,000,000 VND/month
- Meal allowance
- Sports club
- Modern working facilities with high-configuration computers, MacBooks, etc.
- Free parking, pantry, microwave, coffee maker, etc.
**Salary**: 25,000,000₫ - 45,000,000₫ per month
**Education**:
- Bachelor's (preferred)
**Experience**:
- Data Engineer: 3 years (preferred)
**Language**:
- English (preferred)
Data Engineer
Posted today
Job Description
(Salary: 15 - 17 million VND)
1. Build and maintain data pipelines (ETL, ELT) to integrate data from various sources: APIs, databases, CSV files, etc.
2. Test and control input/output data to ensure its accuracy before providing it to end users.
3. Analyze requirements and work with stakeholders to prepare data for dashboard-building and business-analysis projects.
4. Manage, operate, and optimize the performance of the data infrastructure, limiting disruptions to data access.
5. Debug and troubleshoot the data infrastructure.
6. Monitor data warehouse operations to ensure availability.
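The pipeline duties above (integrate CSV/API/database sources, validate inputs, load into a warehouse) can be sketched end to end. This is a minimal illustration only; the file name, column names, and the `orders` table are hypothetical, not taken from the posting, and a real pipeline would use tools like Airflow or Spark rather than plain stdlib Python.

```python
# Minimal ETL sketch: extract rows from a CSV source, validate them
# (input control), and load them into a warehouse table.
# All names (order_id, amount, the "orders" table) are illustrative.
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: keep only rows with a valid positive amount,
    rejecting malformed data before it reaches end users."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # drop rows that fail validation
        if amount > 0:
            clean.append({"order_id": row["order_id"], "amount": amount})
    return clean

def load(rows, conn):
    """Load: upsert validated rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT PRIMARY KEY, amount REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO orders (order_id, amount) VALUES (:order_id, :amount)",
        rows,
    )
    conn.commit()

source = "order_id,amount\nA1,120.5\nA2,not_a_number\nA3,80.0\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(source)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # the malformed A2 row is filtered out: (2, 200.5)
```

The same extract/transform/load split is what orchestration tools such as Airflow schedule and monitor, with each stage as a separate task.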
**Position**: Staff/Specialist
**Employment type**: Full-time
**Benefits**:
1. Enjoy the Company's comprehensive health program
2. Holidays: 12 days of annual leave & statutory holidays
3. Bonuses: holidays, Tet, sales, etc.
4. Gifts on March 8 (women), April 6 (men), birthdays, etc.
5. Allowances: marriage, bereavement, sickness, maternity, etc.
6. Participation in annual activities: year-end, New Year, team building, etc.
7. Working time: 8am - 5pm, Monday - Friday
**Minimum education**: Intermediate - Vocational
**Job requirements**:
- At least 1 year of experience with Python, or familiarity with Java, Scala, etc.
- Experience with SQL, NoSQL and data architecture
- Experience with data pipeline and workflow tools: Airflow, Spark, Pentaho, etc.
- Ability to build and deploy APIs using frameworks such as Django, Flask, FastAPI, etc.
- Experience working with cloud services such as GCP, AWS, etc.
- Knowledge of Docker and working on Linux is an advantage
- Experience and knowledge of e-commerce is preferred.
**Gender requirement**: Male/Female
**Industry**: IT - Software, Data Analytics, SQL
Data Engineer
Posted 22 days ago
Job Description
- Collaborate with the XTract team and other teams to develop new features for the Core AI Platform (Hadoop/Kafka/Spark stack).
- Research, design, implement, and enhance the document processing (Core AI) platform using machine learning and deep learning techniques based on customer requests or industry best practices.
- Conduct research on new technologies and apply them to applications.
- Research, train, and implement machine learning models according to requests or the core system design.
(Senior) Data Engineer
Posted today
Job Description
- Create and maintain optimal data pipelines to process large data sets;
- Build analytics tools to provide insights into key business performance metrics and user behaviors;
- Work closely with Product teams to support their data needs and ad-hoc reports;
- Analyze data to extract insights and troubleshoot any problems that arise.
**What you will need**:
- Experience in Java/Scala (preferably) or Python programming languages;
- Experience in writing complex SQL queries;
- Experience in data modeling and relational databases such as MySQL or Postgres;
- Experience in building data pipelines (ETL);
- Knowledge of big data technologies such as Apache Spark and Hadoop is a big plus;
- Familiarity with web development is a plus.
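Several of the postings above ask for complex SQL and relational data modeling. As one small illustration of the kind of query involved, the sketch below counts purchases per country while still reporting countries with zero purchases. The schema (`users`, `events`) and all data are hypothetical, invented for this example only.

```python
# Illustrative SQL sketch using Python's built-in sqlite3; the schema
# and data are made up, not from any posting above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, country TEXT);
CREATE TABLE events (user_id INTEGER, kind TEXT);
INSERT INTO users VALUES (1, 'VN'), (2, 'VN'), (3, 'SG');
INSERT INTO events VALUES (1, 'click'), (1, 'purchase'),
                          (2, 'click'), (3, 'purchase');
""")

# Per-country purchase counts. Filtering on e.kind in the ON clause
# (not WHERE) preserves the LEFT JOIN, so users with no purchases
# still contribute a NULL row that COUNT(e.user_id) skips.
query = """
SELECT u.country, COUNT(e.user_id) AS purchases
FROM users u
LEFT JOIN events e ON e.user_id = u.id AND e.kind = 'purchase'
GROUP BY u.country
ORDER BY u.country
"""
result = conn.execute(query).fetchall()
print(result)  # [('SG', 1), ('VN', 1)]
```

Moving the `e.kind = 'purchase'` condition into the WHERE clause would silently drop countries with no purchases, which is exactly the kind of subtlety these roles expect candidates to catch.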