44 ETL Developer jobs in Vietnam
Data Engineer
Posted today
Job Description
(Salary: Negotiable)
What you’re going to do
- Design, construct, install, test, and maintain data management systems
- Build data transformations, data models, dashboards, and reports
- Ensure that all systems meet the business/company requirements as well as industry practices
- Integrate up-and-coming data management and software engineering technologies into existing data structures
- Develop set processes for data mining, data modeling, and data production
- Research new uses for existing data
- Employ an array of technological languages and tools to connect systems together
- Install/update disaster recovery procedures
- Recommend different ways to constantly improve data reliability and quality
**Position**: Staff/Specialist
**Employment type**: Full-time
**Benefits**:
We are the sharp-minded IT experts who tackle the trickiest software and security challenges. With more than 630 employees in our locations in Zurich (HQ), Bern, Lausanne, Budapest, Lisbon, Singapore, and Ho Chi Minh City, we make the digital business of our clients work.
As a great team, we empower each other to share, grow and succeed. The unique Adnovum spirit across locations stands for helping each other at any time, keeping an open door, and contributing to an appreciative and trustful atmosphere. We always enjoy having a laugh, a coffee or a drink together!
Apart from our unique «one Adnovum» spirit, we offer a solution-oriented engineering culture with flat hierarchies, which gives you the opportunity to contribute with your opinions and ideas. We embrace flexible working, like the possibility to work part-time and a hybrid work model. Your continuous education and development are key to us. Therefore, we actively encourage and support individual training opportunities.
We offer
- Different customer projects using technologies like Java 8, Java EE, EJB, JPA, Hibernate, Spring, JUnit, Mockito, Eclipse RCP, WebServices (RESTful, SOAP), JavaScript, Cordova, JSF, Angular (2), HTML5, CSS 3, jQuery, etc.
- Project assignments according to your skill set and development goals
- Working side by side with highly skilled and experienced software engineers
- Collaboration with colleagues in Switzerland, Hungary, Portugal and Singapore
- Friendly working atmosphere in a well-equipped and professional IT environment
- Long-term and stable job with flexible working hours
- A competitive salary plus a performance based bonus, a premium healthcare plan and free English classes
**Minimum education**: Bachelor's degree
**Job requirements**:
What we’re looking for
- Bachelor’s degree in computer science or similar
- Min. 3 years’ proven experience as a Data Engineer or similar
- Proficient in Data Modelling, Data Architecture, ETL, Data warehousing, Data Lake
- Proficient in Linux/Unix and shell scripting as well as in functional programming languages
- Proficient in one or more scripting languages (e.g., Python, R)
- Experience in Apache Hadoop-based analytics covering data processing, access, storage, governance, security, and operations
- Experience in cloud-based Big Data technologies
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Prior data streaming experience with Spark/Python
- Knowledge in deploying microservices
- Creative thinking backed by strong analytical and problem-solving skills
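The streaming experience the list above asks for (Spark/Python) centers on windowed, incremental aggregation. As a minimal, dependency-free sketch of that idea (no Spark; the function and event shape are illustrative, not from any posting):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed tumbling windows and
    count occurrences per key -- a toy stand-in for the windowed
    aggregations Spark Structured Streaming performs at scale."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (30, "click"), (65, "view"), (70, "click")]
print(tumbling_window_counts(events))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```

In a real Spark job the same grouping would be expressed with `groupBy(window(...), ...)` over an unbounded stream; the sketch only shows the windowing arithmetic.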
**Gender requirement**: Male/Female
**Industry**: IT - Software, Data Analytics, SQL
Bachelor's degree
Under 1 year
Data Engineer
Posted today
Job Description
You should have excellent communication skills to work with product owners to understand data requirements, and to build ETL procedures to ingest the data into the data lake. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data lake.
If you are self-directed and comfortable supporting the data needs of multiple teams, systems and products, we would like to meet you.
**YOUR MAIN DUTIES**:
- Manage and expand our existing AWS big data infrastructure
- Build ETL processes within the AWS environment
- Perform data extraction, cleaning, transformation, and flow
- Design, build, launch and maintain efficient and reliable large-scale batch and real-time data pipelines with data processing frameworks
- Integrate and collate data silos in a manner which is scalable, compliant and cost efficient
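The extract-clean-transform flow described in the duties above can be sketched independently of any AWS service. A minimal, self-contained example (field names and the aggregation are illustrative; a real pipeline would read from S3 and write to the data lake):

```python
import csv
import io

RAW = """user_id,amount,country
1, 10.5 ,vn
2,,sg
3,7.25,VN
"""

def extract(text):
    """Parse raw CSV text into dict rows (stand-in for reading from S3)."""
    return list(csv.DictReader(io.StringIO(text)))

def clean(rows):
    """Drop rows with missing amounts; normalize whitespace and case."""
    out = []
    for r in rows:
        amount = r["amount"].strip()
        if not amount:
            continue  # skip incomplete records
        out.append({"user_id": int(r["user_id"]),
                    "amount": float(amount),
                    "country": r["country"].strip().upper()})
    return out

def load(rows):
    """Aggregate revenue per country (stand-in for the load step)."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

print(load(clean(extract(RAW))))
# {'VN': 17.75}
```

Keeping each stage a pure function, as here, is what makes such pipelines easy to test and to rehost on Glue, Lambda, or a plain container.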
**YOUR ATTRIBUTES**:
- English proficiency is a must
- 2+ years of experience in designing and developing solutions using AWS services such as Lambda, Glue, Athena, etc.
- Good knowledge of programming Python is a must
- Hands-on working knowledge of relational/NoSQL databases such as MySQL, PostgreSQL
- Experience with DB administration in a production environment; performance/scaling concepts and tuning best practices
- Experience working in agile development teams using Scrum methodologies
- Able to work autonomously and within a team
- Must be organized, efficient, and have good attention to detail
- Must be able to show initiative to get a job done with little or no supervision
**WHAT WE OFFER**:
- English speaking multi-cultural environment
- Flexible working time
- Competitive salary and benefits
- Premium medical insurance package (option to extend to family members)
- Annual employee health check
- Generous annual leave days and paid sick leave days
- Bi-Annual Performance Review
- English classes / Toastmasters Clubs
- Various trainings on soft-skills and best practices
- Annual company trips, year end party and periodic team-building activities
**Job Types**: Full-time, Contract, Permanent
Data Engineer
Posted today
Job Description
Adnovum Vietnam
Python Data Analyst English
- Etown 2 Building, 364 Cong Hoa, Tan Binh, Ho Chi Minh - Flexible - 9 hours ago
**3 Reasons to Join the Company**:
- Competitive salary + bonus
- Premium healthcare
- Free English classes
**Job Description**:
**What you’re going to do**:
- Design, construct, install, test, and maintain data management systems
- Build data transformations, data models, dashboards, and reports
- Ensure that all systems meet the business/company requirements as well as industry practices
- Integrate up-and-coming data management and software engineering technologies into existing data structures
- Develop set processes for data mining, data modeling, and data production
- Research new uses for existing data
- Employ an array of technological languages and tools to connect systems together
- Install/update disaster recovery procedures
- Recommend different ways to constantly improve data reliability and quality
**Job Requirements**:
**What we’re looking for**:
- Bachelor’s degree in computer science or similar
- Min. 3 years’ proven experience as a Data Engineer or similar
- Proficient in Data Modelling, Data Architecture, ETL, Data warehousing, Data Lake
- Proficient in Linux/Unix and shell scripting as well as in functional programming languages
- Proficient in one or more scripting languages (e.g., Python, R)
- Experience in Apache Hadoop-based analytics covering data processing, access, storage, governance, security, and operations
- Experience in cloud-based Big Data technologies
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Prior data streaming experience with Spark/Python
- Knowledge in deploying microservices
- Creative thinking backed by strong analytical and problem-solving skills
**Why You'll Love Working Here**:
We are the sharp-minded IT experts who tackle the trickiest software and security challenges. With more than 630 employees in our locations in Zurich (HQ), Bern, Lausanne, Budapest, Lisbon, Singapore, and Ho Chi Minh City, we make the digital business of our clients work.
As a great team, we empower each other to share, grow and succeed. The unique Adnovum spirit across locations stands for helping each other at any time, keeping an open door, and contributing to an appreciative and trustful atmosphere. We always enjoy having a laugh, a coffee or a drink together!
Apart from our unique «one Adnovum» spirit, we offer a solution-oriented engineering culture with flat hierarchies, which gives you the opportunity to contribute with your opinions and ideas. We embrace flexible working, like the possibility to work part-time and a hybrid work model. Your continuous education and development are key to us. Therefore, we actively encourage and support individual training opportunities.
**We offer**
- Different customer projects using technologies like Java 8, Java EE, EJB, JPA, Hibernate, Spring, JUnit, Mockito, Eclipse RCP, WebServices (RESTful, SOAP), JavaScript, Cordova, JSF, Angular (2), HTML5, CSS 3, jQuery, etc.
- Project assignments according to your skill set and development goals
- Working side by side with highly skilled and experienced software engineers
- Collaboration with colleagues in Switzerland, Hungary, Portugal and Singapore
- Friendly working atmosphere in a well-equipped and professional IT environment
- Long-term and stable job with flexible working hours
- A competitive salary plus a performance based bonus, a premium healthcare plan and free English classes
Data Engineer
Posted today
Job Description
Carousell Group is the leading online classifieds marketplace for secondhand goods in Southeast Asia, with a mission to inspire the world to sell and buy more sustainably. Chotot Data Engineering is a sub-team of Carousell Group Data Engineering, focused on building a self-service data infrastructure platform. Our team is dedicated to creating a robust platform that allows users to easily ingest, access, and transform data into valuable insights using familiar and comfortable tools. We are currently seeking an excellent Data Engineer to join our team; you will have the opportunity to work on group-level projects that deliver meaningful value and contribute to our mission in Southeast Asia.
**RESPONSIBILITIES**
- Develop and maintain data exploration and visualization platforms, the customer data platform (CDP), and the experimentation platform to support business needs and enable data-driven decision making
- Develop and maintain MLOps tools, platforms for data scientists and ML engineers to automate, standardize, and manage ML projects
- Design solutions and build self-serve data infrastructure platforms for internal data users such as data ingestion, transformation, streaming processing, event data management. The goal is to improve the efficiency and effectiveness of using data across the group through self-service capabilities
- Manage common data infrastructure components, such as orchestration tools, logging and monitoring, CI/CD, and data governance
- Responsible for cloud cost control via cloud FinOps best practices
- Build, maintain, and efficiently scale complex data ETL and high-throughput real-time and offline data pipelines
**Qualifications**
**REQUIREMENTS**
Must have:
- 3+ years of experience in data engineering roles with BSc or MSc degree in Computer Science
- Experience building platforms for internal use, with a strong focus on self-service capabilities and efficient data processing
- Familiarity with SQL/BigQuery, Kafka, Kubernetes, Docker
- Strong proficiency in at least one of the following programming languages: Python, Scala
- Have experience using the Flask framework
- Excellent written and spoken English skills
- Ability to quickly learn and adapt to new technologies.
- Strong attention to detail, self-motivated, and responsible.
**Additional Information**
- Nice-to-Have:
- Experience with Golang
- Strong scripting ability in Bash
- Familiarity with designing and operating robust distributed systems
Data Engineer
Posted today
Job Description
We're seeking a talented Data Engineer to join our growing team! You will play a crucial role in designing, building, and maintaining data pipelines across two major cloud platforms: Azure and AWS. Your expertise will ensure our data is clean, accessible, and ready for analysis.
**What you'll bring**
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Minimum 3 years of experience as a Data Engineer (or similar role).
- Proven experience in designing, developing, and implementing data pipelines.
- Strong understanding of data warehousing and data lake concepts.
- Proficiency in Python and JavaScript for data manipulation and scripting.
- Experience with both Azure (Logic Apps, Function Apps, Blob Storage) and AWS (Step Functions, Lambda, S3, EC2, SNS) cloud platforms.
- Experience with GitHub Actions for CI/CD automation.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills in English.
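Pipelines built from the AWS services listed above (Step Functions, Lambda, S3, SNS) compose small handler functions into larger workflows. A minimal, dependency-free sketch of a Lambda-style transform step (the event shape and field names are illustrative, not from the posting; in production the event would arrive from S3 or SNS):

```python
import json

def handler(event, context=None):
    """Lambda-style entry point: filter and reshape incoming records.

    Because the event is just a dict, the function runs anywhere,
    which also makes it trivial to unit-test outside AWS."""
    records = event.get("records", [])
    cleaned = [
        {"id": r["id"], "value": round(float(r["value"]), 2)}
        for r in records
        if r.get("value") not in (None, "")  # drop empty measurements
    ]
    return {"statusCode": 200, "body": json.dumps(cleaned)}

event = {"records": [{"id": 1, "value": "3.14159"}, {"id": 2, "value": ""}]}
print(handler(event))
# {'statusCode': 200, 'body': '[{"id": 1, "value": 3.14}]'}
```

A Step Functions state machine would chain several such handlers, passing each step's `body` as the next step's input.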
**Championing diversity, equity, and inclusion**
**How we look after you**
We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We're also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We're always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you'll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with.
- We're proud to say we're an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran, age, disability status_ or any other protected characteristic._ Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
Data Engineer
Posted 7 days ago
Job Description
- Collaborate with the XTract team and other teams to develop new features for the Core AI Platform (Hadoop/Kafka/Spark stack).
- Research, design, implement, and enhance the document processing (Core AI) platform using machine learning and deep learning techniques based on customer requests or industry best practices.
- Conduct research on new technologies and apply them to applications.
- Research, train, and implement machine learning models according to requests or core system design.
Data Engineer
Posted 19 days ago
Job Description
- Collaborate with the XTract team and other teams to develop new features for the Core AI Platform (Hadoop/Kafka/Spark stack).
- Research, design, implement, and enhance the document processing (Core AI) platform using machine learning and deep learning techniques based on customer requests or industry best practices.
- Conduct research on new technologies and apply them to applications.
- Research, train, and implement machine learning models according to requests or core system design.
HCMC - Data Engineer
Posted today
Job Description
**Amaris Consulting** is an independent technology consulting firm providing guidance and solutions to businesses. With more than 1000 clients across the globe, we have been rolling out solutions in major projects for over a decade - this is made possible by an international team of 6000 people spread across 5 continents and more than 60 countries. Our solutions focus on four different Business Lines: Information System & Digital, Telecom, Life Sciences and Engineering. We’re focused on building and nurturing a top talent community where all our team members can achieve their full potential. Amaris is your steppingstone to cross rivers of change, meet challenges and achieve all your projects with success.
**Brief Call**: Our process typically begins with a brief virtual/phone conversation to get to know you! The objective? Learn about you, understand your motivations, and make sure we have the right job for you!
**Interviews** (the average number of interviews is 3 - the number may vary depending on the level of seniority required for the position). During the interviews, you will meet people from our team: your line manager of course, but also other people related to your future role. We will talk in depth about you, your experience, and skills, but also about the position and what will be expected of you. Of course, you will also get to know Amaris: our culture, our roots, our teams, and your career opportunities!
**Case study**: Depending on the position, we may ask you to take a test. This could be a role play, a technical assessment, a problem-solving scenario, etc.
We look forward to meeting you!
**Job description**:
**ABOUT THE JOB**:
- Responsible for the structuring and execution of the company’s in-house data construction and management
- Responsible for integration and analysis across ETL processes and a variety of data sources
- Participate in ETL implementation and enhancement
- Work with Business Analysts to understand the technical and business requirements
- Work with data integration technologies such as SSIS, Azure Data Factory, and Function Apps.
- Work with data storage tools such as Azure Blob, Azure SQL, MongoDB.
**ABOUT YOU**:
- Bachelor's or Master's degree in Information Technology or Computer Science, or equivalent practical work experience
- 6+ years of experience in relevant industries
- Experience in relevant domain (CRM, Big Data, Business Intelligence, Analytics Reporting)
- Experience with ETL, data management, transformation, and modelling
- Experience with Cloud Technologies
- Required knowledge of SQL, C#, .NET
- Experience with Reporting Service like SSRS
- Knowledge and experience in end-to-end project delivery, hybrid / agile delivery methodologies
- Technical expertise with data models, data mining, and segmentation techniques
- Great numerical and analytical skills
- Data engineering certification will be a plus
- Excellent written and verbal communication in English
- Capacity to manage high-stress situations
- Ability to multi-task and manage various project elements simultaneously
- Leadership skills
- Big-picture thinking and vision
- Attention to detail
- Conflict-resolution skills
**EQUAL OPPORTUNITY**:
Senior Data Engineer
Posted today
Job Description
GFT Technologies Vietnam
AWS Python SQL
- 29A Nguyen Dinh Chieu, District 1, Ho Chi Minh - Flexible - 2 hours ago
**3 Reasons to Join the Company**:
- We build a professional & fun working environment.
- We focus on your growth, yes the long-term growth.
- We develop the future-ready digital bank platform.
**Job Description**:
- Experience as a Data Engineer, preferably in the financial services industry
- Experience with running containerized workloads on Kubernetes
- Worked in an Agile environment with continuous delivery
- Excellent problem-solving and communication skills
- Ability to work effectively in a team-oriented environment and collaborate with cross-functional teams
**Job Requirements**:
- Strong experience with Python, Pandas, and PySpark, and with building ETL workflows in Airflow
- Strong SQL skills and experience with relational and non-relational databases
- Knowledge of data modeling, data warehousing, and BI concepts
- Practical knowledge of Agile processes
- Experience working in financial or regulated environment
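The "strong SQL skills" and "data modeling, data warehousing" items above usually come together in a dimensional model queried with rollups. A minimal sketch using Python's built-in `sqlite3` (the star-schema tables and figures are illustrative, not from the posting):

```python
import sqlite3

# Toy star schema: one fact table, one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, amount REAL,
                          FOREIGN KEY (product_id) REFERENCES dim_product);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO fact_sales  VALUES (1, 12.0), (1, 8.0), (2, 30.0);
""")

# Typical warehouse-style rollup: total revenue per category.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY revenue DESC
""").fetchall()
print(rows)
# [('games', 30.0), ('books', 20.0)]
```

The same fact/dimension split and GROUP BY rollup carries over directly to the warehouse engines the postings mention (Redshift, BigQuery), where Airflow would schedule the loads.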
**Why You'll Love Working Here**:
**HR benefits**
- Competitive salary
- Salary band per level and employee benefits are reviewed once per year
- 13th month salary pro rata depending on the employee's length of service (within a calendar year), paid with the December salary
- Monthly lunch allowance: 700,000 VND/employee
- Parking: GFT covers the monthly parking fee for employee motorbikes
- Performance evaluation is once per year, for 2 purposes:
> Performance bonus
> Salary increments
**Talent retention policy** (Retention bonus)
- 2-year anniversary = 0.5x monthly salary
- 3-year anniversary = 1x monthly salary
- 5-year anniversary = 2x monthly salary
- Paid with salary of month of anniversary.
**Health care**
- Private health insurance: including accident, outpatient, in-patient, maternity, and dental for all permanent employees who pass 2-month probation.
- Optical: expense claim for eyewear
- Annual health check-ups.
**Vacation**
- Maximum 18 days of vacation leave per year (with the ability to carry over 5 days until 31 March of the following year)
- One additional annual leave day for each two-year anniversary.
**Healthy lifestyle**
- Sports and hobby clubs: the company has an annual fund for fitness activities, allocated monthly per the team's vote.
- Range of healthy snacks, tea, coffee, milk, and beer on tap:
> Tea, coffee, and milk are available at the pantry area - WeWork
> Beer is available at the pantry area - WeWork
> Snacks are available in the GFT office.
**Social**
- Company townhall: every 6 weeks
- Monthly team lunch at restaurants
- Monthly team engagement activities: one activity per month
- CSR activities: as per company’s CSR guideline and practice
- Hackathon: once per year
- Onsite tour/training courses at other GFT offices and client’s destination overseas (where applicable).