Job Description
Build a data platform that integrates retail-industry data on a cloud service architecture. In this role you will apply cloud-native services, programming skills, and retail domain knowledge while working closely with the infrastructure team to keep the retail group's data platform stable, integrate it seamlessly with cloud services, and create a real-time, business-oriented data environment. We welcome candidates with a strong interest in cloud systems and building big data platforms who want to develop cloud data expertise starting from retail-industry data!

【Job Description】
This position belongs to the Data Center Team.
1. Own the data systems in the AWS/GCP cloud environment.
2. Apply knowledge of ETL pipeline processes on cloud data platforms to develop automated data-processing workflows and design monitoring solutions.
3. Maintain the stability of the cloud data platform and improve its architecture to ensure the transmission quality, efficiency, and accuracy of daily data operations.
4. Develop APIs for data and model services and optimize CI/CD development and deployment processes to ensure data security and scalability.
5. Design, build, and operate large-scale enterprise data solutions, constructing data production pipelines with cloud-native solutions in a big data architecture.
6. Monitor, troubleshoot, maintain, and upgrade the cloud data application service platform.
7. Other tasks assigned by management.

【Experience】
1. At least two years of experience with Python and database management (SQL and NoSQL).
2. At least two years of experience with AWS/GCP/Azure cloud architecture.
3. Experience developing and maintaining cloud big data platforms, including data processing, storage, and analysis.
4. Experience developing data pipelines/ETL or designing DevOps pipelines.
5. Experience with Linux operating environments, or with Java development and operations.

【Skills】
1. Proficiency with AWS/GCP cloud services.
2. Familiarity with relational (SQL) and NoSQL databases, including writing queries.
3. Proficiency in Python development.
4. Familiarity with Spark, Kafka, containers, and Docker.
5. Experience developing cloud data pipelines and DevOps pipelines.
6. Hands-on experience with common open-source tools.
7. Strong problem-solving, debugging, and troubleshooting skills, with the ability to propose solutions.