43 Big Data jobs in Vietnam
Big Data Engineer (Spark/Hadoop/Scala)
Posted 22 days ago
Job Description
- Work on the data pipeline infrastructure that is the backbone of our business.
- Write elegant, functional Scala code to crunch TBs of data on Hadoop clusters, mostly using Spark (a minimal PySpark sketch follows this listing).
- Own data pipeline deployments to clusters, on-premises or in the cloud (AWS, GCP, or others).
- Manage Hadoop clusters end to end, from security to reliability to high availability (HA).
- Build a pluggable, unified data lake from scratch.
- Automate and scale tasks for the Data Science team.
- Constantly look to improve frameworks and pipelines, so learning on the job is a given.
- Our expertise and requirements include, but are not limited to: Spark, Scala, HDFS, YARN, Hive, Kafka, distributed systems, Python, datastores (relational and NoSQL), and Airflow.
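The role above centers on batch-processing terabytes of data with Spark. The listing calls for Scala, but Python is also part of the stack, and Spark exposes a Python API (PySpark); the sketch below uses it purely for illustration. The input path, schema, and column names are hypothetical assumptions, not the employer's actual pipeline.

```python
# Minimal PySpark batch-aggregation sketch (paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily-event-aggregation")  # illustrative job name
    .getOrCreate()
)

# Read raw events from HDFS (Parquet assumed; adjust format/path as needed).
events = spark.read.parquet("hdfs:///data/raw/events/")

# Aggregate event counts per user per day.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Write partitioned output back to the data lake.
(
    daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("hdfs:///data/curated/daily_event_counts/")
)

spark.stop()
```

In practice a job like this would be submitted to YARN via `spark-submit` and scheduled by an orchestrator such as Airflow, both of which appear in the stack listed above.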
Big Data Engineer (Spark/Hadoop/Scala)
Posted 27 days ago
Job Description
- Work on the data pipeline infrastructure that is the backbone of our business.
- Write elegant, functional Scala code to crunch TBs of data on Hadoop clusters, mostly using Spark.
- Own data pipeline deployments to clusters, on-premises or in the cloud (AWS, GCP, or others).
- Manage Hadoop clusters end to end, from security to reliability to high availability (HA).
- Build a pluggable, unified data lake from scratch.
- Automate and scale tasks for the Data Science team.
- Constantly look to improve frameworks and pipelines, so learning on the job is a given.
- Our expertise and requirements include, but are not limited to: Spark, Scala, HDFS, YARN, Hive, Kafka, distributed systems, Python, datastores (relational and NoSQL), and Airflow.
Big Data Engineer (Spark/Hadoop/Scala)
Posted 6 days ago
Job Description
- Work on the data pipeline infrastructure that is the backbone of our business.
- Write elegant, functional Scala code to crunch TBs of data on Hadoop clusters, mostly using Spark.
- Own data pipeline deployments to clusters, on-premises or in the cloud (AWS, GCP, or others).
- Manage Hadoop clusters end to end, from security to reliability to high availability (HA).
- Build a pluggable, unified data lake from scratch.
- Automate and scale tasks for the Data Science team.
- Constantly look to improve frameworks and pipelines, so learning on the job is a given.
- Our expertise and requirements include, but are not limited to: Spark, Scala, HDFS, YARN, Hive, Kafka, distributed systems, Python, datastores (relational and NoSQL), and Airflow.
Talend Engineer (Big Data)
Posted today
Job Description
(Salary: from 15 million VND)
Job description
Design, develop, test, and deploy new ETL pipelines, or enhancements to existing pipelines, using Talend in a Big Data environment.
Translate functional requirements into technical designs.
Perform end-to-end automation of the ETL process for the various datasets being ingested into the big data platform.
Resolve customer complaints with data and respond to suggestions for improvements and enhancements.
Recommend strategies for technical aspects of projects as well as broad system improvements.
Ensure adherence to established standards and consult with senior managers on technology solutions when needed.
Your skills and experience
Good English communication skills, both verbal and written.
5+ years of related IT experience
3+ years development experience with Talend Data Integration module
2+ years of experience with the Talend Administration Center, Data Quality, API Designer and Services, and Big Data Frameworks modules
2+ years of experience with Hadoop, Hive, HDFS and Oracle RDBMS
Experience with database schema, object management, data modeling & architecture, data warehouse design.
Strong knowledge of Java programming and PL/SQL
Experience with API integration (REST)
Working knowledge of Unix/Linux and Shell script
Working knowledge of source-code control using a tool such as GitLab
Experience with CI/CD and DevOps development using secure coding practices.
Performance tuning is a plus.
Knowledge in emerging cloud technologies related to Big Data is a plus.
Previous experience in Finance/Banking industry is a plus.
Why you'll love working here
Highly competitive salary.
Full salary for probation & full coverage of social insurance.
Premium healthcare for you and your loved ones.
Monthly childcare support
Annual Leave: 15 days; Paternity Leave: 6 weeks, Bereavement leave: 10 days.
Various active sporty clubs: Football, Billiards, Badminton, E-sport clubs
Frequent opportunities to travel to the US headquarters for 3-6 months.
Aperia Solutions Viet Nam is a limited liability company operating in the IT software sector in Ho Chi Minh City. We are currently hiring for the positions "(Mid/Sr) Java Developer (ReactJS, J2EE, Spring boot)" and "Talend Engineer (Big Data)", with skills such as Test Automation, IT software, and technical processes. Working at Aperia Solutions Viet Nam, you will enjoy benefits such as a well-equipped office and a good company culture.
**Position**: Staff/Specialist
**Employment type**: Full-time
**Benefits**:
Well-equipped office
Good company culture
Company parties
**Minimum education**: Vocational/intermediate degree
**Job requirements**:
Data Analyst
IT software
Technical processes
**Gender requirement**: Male/Female
**Industry**: Business/Sales, Sales Consulting, Construction
Data Engineer
Posted today
Job Description
(Salary: Negotiable)
- Work closely with engineering teams to help build data solutions for BAEMIN;
- Design, evaluate, and implement frameworks, platforms, and architecture that can adequately handle the needs of a rapidly growing data-driven company;
- Work with Data Scientists to design and build scalable, low-latency AI-powered systems;
- Design and build both batch and real-time data pipelines for various data sources: APIs, flat files, databases, etc. (a minimal Airflow sketch follows this list);
- Build and maintain the data warehouse for the company;
- Improve and optimize workloads and processes to ensure that performance levels can support continuous, accurate, reliable, and timely delivery of data products;
- Prepare data for Data Science projects;
- Monitor and optimize data infrastructure costs to keep them at a reasonable level;
- Remain up to date with industry standards and technological advancements to ensure the data infrastructure is both scalable and reliable;
- Work with other teams to continuously improve the company's data infrastructure and data warehouse;
- Coach junior team members;
- Provide suitable training for other teams (if any);
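The responsibilities above describe batch and real-time pipelines fed from APIs, flat files, and databases; Airflow, which appears elsewhere in these listings, is a common orchestrator for the batch side. Below is a minimal, illustrative Airflow (2.x TaskFlow API) sketch of a daily pipeline; the DAG name, tasks, and data are assumptions, not the employer's actual setup.

```python
# Illustrative daily batch pipeline as an Airflow DAG (names, schedule, and data are assumptions).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_orders_pipeline():
    @task
    def extract() -> list[dict]:
        # A real task would call an API or query a source database.
        return [{"order_id": 1, "amount": 120_000}, {"order_id": 2, "amount": 85_000}]

    @task
    def transform(rows: list[dict]) -> dict:
        # Simple aggregation standing in for heavier Spark/SQL transforms.
        return {"order_count": len(rows), "revenue": sum(r["amount"] for r in rows)}

    @task
    def load(summary: dict) -> None:
        # A real task would write to the warehouse; here we just log the result.
        print(f"Loading summary into warehouse: {summary}")

    load(transform(extract()))


daily_orders_pipeline()
```

Real-time sources (e.g. Kafka streams) would typically be handled outside Airflow by a streaming job, with Airflow reserved for scheduled batch work like the above.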
**Position**: Staff/Specialist
**Employment type**: Full-time
**Benefits**:
1/ Attractive salary & benefits
- 12 days of annual leave + 4 fully paid sick days;
- Year-end bonus, performance bonus, public holiday bonus, birthday bonus;
- Appraisal and salary review every year;
2/ Macbook provided
3/ Opportunity to be trained and to work at a leading food tech company in Viet Nam;
**Minimum education**: University degree
**Job requirements**:
- Around 3+ years of experience, preferably in big data infrastructure;
- Proficient in common big data toolset in a large-scale environment;
- Experienced in deploying ML models to production;
- Well-versed in setting up continuous integration and deployment for big data or other projects;
- Familiar with software development process and culture; strong programming skills;
- Familiar with third-party analytics solutions (e.g. Amplitude, Looker, Segment, Google Tag Manager);
- Working knowledge of various topics in the data domain (e.g. platforms, analysis, ML, etc.);
- Knowledge and experience in both relational databases and NoSQL databases (e.g. MongoDB);
- Knowledge and experience in cloud-based infrastructure (e.g. AWS, GCP);
- Good communication skills both in Vietnamese & English;
- An open mindset toward learning and problem-solving;
**Gender requirement**: Male/Female
**Industry**: Cloud (AWS/Azure), IT - Software, SQL
Data Engineer
Posted today
Job Description
**Responsibilities**
- Design and develop data pipelines and ETL processes to collect, process, and store large-scale data from various sources, including user interactions, web logs, and sensor data.
- Develop and maintain data architecture and infrastructure, ensuring scalability, reliability, and security of data storage and processing.
- Build and deploy machine learning models and algorithms for personalized education and user experience, using technologies such as TensorFlow, PyTorch, or Scikit-learn (a minimal scikit-learn sketch follows this list).
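As a rough illustration of the model-building step described above, here is a minimal scikit-learn sketch that trains and evaluates a simple classifier. The feature names and generated data are fabricated placeholders, not the company's actual dataset or model.

```python
# Minimal scikit-learn sketch: train and evaluate a simple classifier
# (features and data are hypothetical placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder features: e.g. minutes studied, lessons completed, quiz score.
X = rng.random((500, 3))
# Placeholder label: whether the learner completed the course.
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```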
**Job requirements**
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Minimum of 2 years of experience in data engineering, machine learning, or software development, preferably in an edtech or related industry.
- Strong programming skills in Python, Java, or Scala, and experience with SQL and NoSQL databases.
- Experience with ETL tools and frameworks, such as Apache Kafka, Apache Spark, or AWS Glue.
- Experience with machine learning frameworks and platforms, such as TensorFlow, PyTorch, or Scikit-learn.
- Experience with cloud platforms such as AWS or GCP.
- Strong problem-solving skills, analytical skills, and attention to detail.
- Excellent communication skills and ability to work in a team-oriented environment.
- **Working time: Monday - Friday (09:00 - 17:00)**
- **Working location: District 5, Ho Chi Minh City**
**Why candidates should apply for this position**
- All kinds of insurance according to the provisions of the Labor Law (social insurance, health insurance, public holidays, maternity leave, etc.)
- 12 days of annual leave
- Accommodation allowance up to 10,000,000 VND/month
- Meal allowance
- Sports club
- Modern working facilities with high-configuration computers, MacBooks, etc.
- Free parking, pantry, microwave, coffee maker, etc.
**Salary**: 25,000,000₫ - 45,000,000₫ per month
**Education**:
- Bachelor's (preferred)
**Experience**:
- Data Engineer: 3 years (preferred)
**Language**:
- English (preferred)
Data Engineer
Posted today
Job Description
(Salary: 15 - 17 million VND)
1. Build and maintain data pipelines (ETL, ELT) to integrate data from various sources: APIs, databases, CSV files, etc.
2. Test and control input/output data to ensure its accuracy before providing it to end users (a minimal validation sketch follows this list).
3. Analyze requirements and work with stakeholders to prepare data for dashboard-building and business-analysis projects.
4. Manage, operate, and optimize the performance of the data infrastructure, limiting disruptions to data access.
5. Debug and troubleshoot data infrastructure issues.
6. Monitor data warehouse operations to ensure availability.
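Item 2 above is about checking input/output data before handing it to end users. Below is a minimal pandas-based validation sketch; the table, column names, and rules are illustrative assumptions rather than this employer's actual checks.

```python
# Minimal data-quality check sketch with pandas (columns and rules are assumptions).
import pandas as pd

REQUIRED_COLUMNS = {"order_id", "customer_id", "amount", "created_at"}


def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues (empty if clean)."""
    issues = []

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues  # cannot run the remaining checks without these columns

    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values found")
    if df["amount"].lt(0).any():
        issues.append("negative amounts found")
    if df["created_at"].isna().any():
        issues.append("null created_at timestamps found")

    return issues


if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "order_id": [1, 2, 2],
            "customer_id": [10, 11, 12],
            "amount": [150.0, -5.0, 80.0],
            "created_at": pd.to_datetime(["2024-01-01", None, "2024-01-02"]),
        }
    )
    print(validate_orders(sample))
```

Checks like these are often wired into the pipeline itself (e.g. as an Airflow task) so that bad batches are flagged before reaching dashboards or end users.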
**Position**: Staff/Specialist
**Employment type**: Full-time
**Benefits**:
1. Enjoy the Company's comprehensive health program
2. Holidays: 12 days of annual leave & statutory holidays
3. Bonuses: holidays, Tet, sales, etc.
4. Gifts on March 8 (women), April 6 (men), birthdays, etc.
5. Allowances: marriage, bereavement, sickness, maternity, etc.
6. Participation in annual activities: year-end party, New Year, team building, etc.
7. Working time: 8am - 5pm, Monday - Friday
**Minimum education**: Vocational/intermediate degree
**Job requirements**:
- At least 1 year of experience with Python, or familiarity with Java, Scala, etc.
- Experience with SQL, NoSQL, and data architecture
- Experience with data pipeline and workflow tools: Airflow, Spark, Pentaho, etc.
- Ability to build and deploy APIs using frameworks such as Django, Flask, FastAPI, etc. (a minimal FastAPI sketch follows this list)
- Experience working with cloud services such as GCP, AWS, etc.
- Knowledge of Docker and working on Linux is an advantage
- Experience and knowledge of e-commerce is preferred.
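The requirements above mention building and deploying APIs with frameworks such as Django, Flask, or FastAPI. Here is a minimal FastAPI sketch of a read-only data endpoint; the route, model, and in-memory data are hypothetical stand-ins for a real database or warehouse query.

```python
# Minimal FastAPI sketch exposing a read-only data endpoint
# (route name, model, and data are hypothetical).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="demo-data-api")


class Product(BaseModel):
    id: int
    name: str
    price: float


# In-memory stand-in for a real database or warehouse query.
PRODUCTS = {
    1: Product(id=1, name="keyboard", price=25.0),
    2: Product(id=2, name="monitor", price=180.0),
}


@app.get("/products/{product_id}", response_model=Product)
def get_product(product_id: int) -> Product:
    product = PRODUCTS.get(product_id)
    if product is None:
        raise HTTPException(status_code=404, detail="product not found")
    return product
```

Assuming the file is saved as `main.py`, it can be run locally with `uvicorn main:app --reload` and containerized with Docker for deployment.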
**Gender requirement**: Male/Female
**Industry**: IT - Software, Data Analytics, SQL
Data Engineer
Posted 22 days ago
Job Description
- Collaborate with the XTract team and other teams to develop new features for the Core AI Platform (Hadoop/Kafka/Spark stack).
- Research, design, implement, and enhance the document processing (Core AI) platform using machine learning and deep learning techniques, based on customer requests or industry best practices.
- Conduct research on new technologies and apply them to applications.
- Research, train, and implement machine learning models according to requests or the core system design.
Data Engineer
Posted 22 days ago
Job Description
- Collaborate with the XTract team and other teams to develop new features for the Core AI Platform (Hadoop/Kafka/Spark stack).
- Research, design, implement, and enhance the document processing (Core AI) platform using machine learning and deep learning techniques, based on customer requests or industry best practices.
- Conduct research on new technologies and apply them to applications.
- Research, train, and implement machine learning models according to requests or the core system design.
Data Engineer
Posted 6 days ago
Job Description
- Collaborate with the XTract team and other teams to develop new features for the Core AI Platform (Hadoop/Kafka/Spark stack).
- Research, design, implement, and enhance the document processing (Core AI) platform using machine learning and deep learning techniques, based on customer requests or industry best practices.
- Conduct research on new technologies and apply them to applications.
- Research, train, and implement machine learning models according to requests or the core system design (a minimal streaming sketch follows this list).
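The platform described above combines a Hadoop/Kafka/Spark stack with ML-based document processing. Purely as an illustration of how the streaming side might feed a model, here is a minimal consumer sketch using the kafka-python package; the topic name, broker address, message shape, and the placeholder `classify()` function are all assumptions, not the actual XTract design.

```python
# Minimal Kafka consumer sketch feeding documents to a placeholder model
# (topic, broker, message shape, and classify() are hypothetical; uses kafka-python).
import json

from kafka import KafkaConsumer


def classify(text: str) -> str:
    """Placeholder for a real ML/DL document classifier."""
    return "invoice" if "total" in text.lower() else "other"


consumer = KafkaConsumer(
    "documents.incoming",                # assumed topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="doc-processor-demo",
)

for message in consumer:
    doc = message.value                  # expected shape: {"id": ..., "text": ...}
    label = classify(doc.get("text", ""))
    print(f"document {doc.get('id')} classified as {label}")
```

In a production setup the consumer loop would typically be replaced by Spark Structured Streaming or a dedicated service, with results written back to Kafka or HDFS.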