DATA TALKS

During the program, participants will gain in-depth knowledge about the data field from experienced local and international speakers from this ecosystem and develop their knowledge and skills.
Register

Data Talks

Cavid Hüseynov

Kapital Bank, Senior Data Scientist for Risk Models

He began his career in data in 2020 at Unibank, working on modelling in the Risk Management Department, and currently works as a Senior Data Scientist for Risk Models in the Risk Technologies Division at Kapital Bank. At the event, Cavid will share his knowledge and experience of risk models.

Ülvi Salman

Paşa Sığorta, Head of Data Science

He has worked in data science for more than 4 years and has also been teaching data science and analytics for the last 2 years. He currently serves as Head of Data Science at Paşa Sığorta.

Agenda (19.12.2023)

19:00 - 19:10 Registration

19:10 - 19:15 Opening

19:15 - 20:00 Ülvi Salman (Automating the insurance claims management process in the insurance sector)

20:00 - 20:30 Coffee break

20:30 - 21:15 Cavid Hüseynov (Risk models)

Highlights from the Previous Event

Calendar

19 September
Data Talk

16 October
Data Workshop

17 October
Data Workshop

7 December
Data Panel

19 December
Data Talks

Organizers

Kapital Bank, the successor of the Azerbaijan Savings Bank, has been operating successfully for more than 140 years. Today, Kapital Bank is the financial institution with the largest service network in Azerbaijan. As a universal bank, Kapital Bank serves more than 5 million individuals and over 22 thousand legal entities. At the same time, Kapital Bank takes an active part in a number of state social programs and implements a range of programs for the development of the real sector of the economy.

SUP VC is an intensive acceleration center that helps startups grow in Azerbaijan and enter international markets. With a network of dozens of mentors around the world, SUP VC brings knowledge, international experience, and innovative entrepreneurship opportunities to Azerbaijan.

VACANCIES

About the role:

As a Data Quality Engineer at Kapital Bank, you will play a critical role in ensuring the accuracy, consistency, and reliability of our data assets. Your primary responsibility will be to design, implement, and maintain data quality processes and controls to support the bank's operations, regulatory compliance, and business intelligence initiatives. You will collaborate with cross-functional teams to identify and rectify data quality issues, implement data quality best practices, and continuously improve data quality standards.

Key Responsibilities:

1. Data Quality Assessment:

- Conduct comprehensive data quality assessments to identify anomalies, inconsistencies, and inaccuracies in the bank's data;

- Develop and maintain data quality metrics and reports to monitor data health over time.

2. Data Quality Improvement:

- Design and implement data cleansing, transformation, and validation processes to ensure data accuracy and consistency;

- Develop and execute data quality tests, scripts, and procedures to validate data quality rules and standards.

3. Data Quality Framework:

- Establish and maintain a robust data quality framework, including data quality policies, procedures, and guidelines;

- Define and document data quality standards and best practices.

4. Data Governance:

- Collaborate with data governance team to enforce data quality standards and compliance with regulatory requirements;

- Assist in the development and maintenance of data quality policies and procedures in alignment with industry standards and regulations.

5. Data Documentation:

- Document data quality rules and metadata to facilitate data understanding and transparency;

- Maintain documentation related to data quality processes and improvements.

6. Data Quality Monitoring:

- Monitor data quality in real-time or batch processing environments, and implement alerts and notifications for critical data quality issues;

- Develop and maintain data quality dashboards and reporting for stakeholders.
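To make the data quality duties above more concrete, here is a minimal, purely illustrative Python sketch of a few automated quality rules; the column names ("customer_id", "amount") and the 5% null-rate threshold are hypothetical and not part of this posting.

```python
# Illustrative sketch only: simple data quality rules in pandas.
# The columns and thresholds below are hypothetical examples.
import pandas as pd

def check_quality(df: pd.DataFrame) -> dict:
    """Run a few basic data quality rules and return pass/fail results."""
    results = {}

    # Completeness: share of missing values per column must stay below 5%.
    null_rate = df.isna().mean()
    results["max_null_rate_ok"] = bool((null_rate <= 0.05).all())

    # Uniqueness: a hypothetical primary key column must not repeat.
    if "customer_id" in df.columns:
        results["customer_id_unique"] = not df["customer_id"].duplicated().any()

    # Validity: a hypothetical amount column must be non-negative.
    if "amount" in df.columns:
        results["amount_non_negative"] = bool((df["amount"] >= 0).all())

    return results

if __name__ == "__main__":
    sample = pd.DataFrame({"customer_id": [1, 2, 3], "amount": [100.0, 250.5, 0.0]})
    print(check_quality(sample))
```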

Qualifications:

- Bachelor's degree in Computer Science, Information Technology, or a related field;

- Proven experience as a Data Quality Engineer or in a similar role within the banking or financial industry;

- Strong knowledge of data quality principles, data governance, and data management concepts;

- Proficiency in SQL, data quality tools, and data profiling techniques;

- Excellent problem-solving and analytical skills;

- Strong communication and collaboration skills;

- Knowledge of regulatory requirements relevant to the banking industry is a plus.

About the role:

A Data Governance Specialist is responsible for developing, implementing, and enforcing policies and procedures that ensure that data is used and maintained properly within an organization. This includes ensuring that data is accurate, complete, reliable, and secure. Data Governance Specialists also work to promote the understanding and use of data governance principles throughout the organization.

Responsibilities:

• Develop and maintain data governance policies and procedures;

• Implement and enforce data governance policies and procedures;

• Monitor and assess data governance compliance;

• Educate and train employees on data governance principles and procedures;

• Collaborate with other departments to ensure that data governance is integrated into all aspects of the organization;

• Develop and implement data quality standards;

• Manage data dictionaries and other data governance artifacts;

• Identify and mitigate data risks;

• Ensure compliance with data privacy and security regulations.

Qualifications:

• Bachelor's degree in a related field, such as business administration, information technology or computer science;

• 3+ years of experience in data governance or a related field;

• Strong understanding of data governance principles and best practices;

• Experience with data modeling, data quality management and data security;

• Excellent communication and interpersonal skills.

About the role:

We are seeking a highly skilled and motivated Data Operations Engineer to join our dynamic team. In this role, you will be the cornerstone of our data platform operations, providing first-line support for data platforms such as Dataiku, Tableau, and Oracle BI. Your expertise will be vital in administering data tools, ensuring data backup, switchover maintenance, and other critical functions to maintain the seamless operation of our data infrastructure. Your role will also encompass supporting tasks related to Impala, Debezium, Kafka, Spark, and Greenplum.

Responsibilities:

1. First-Line Support:

• Serve as the first point of contact for all issues related to data platforms including Dataiku, Tableau, Oracle BI, Impala, and Greenplum;

• Collaborate with cross-functional teams to troubleshoot and resolve platform issues promptly;

• Assist users with platform-related queries and provide guidance on best practices.

2. Administration of Data Tools:

• Administer data tools such as Dataiku, Tableau, and Oracle BI, including configuration, user access, and upgrades;

• Carry out routine administration tasks to maintain the seamless operation of the data infrastructure.

3. Backup and Recovery:

• Develop and implement data backup strategies to ensure data integrity and availability;

• Conduct regular backup operations and periodically test recovery procedures;

• Collaborate with IT teams to ensure secure and efficient data storage solutions.

4. Switchover Maintenance:

• Plan and execute switchover maintenance activities to minimize downtime;

• Collaborate with the IT department to ensure seamless transition during switchover operations;

• Document switchover procedures and maintain a log of maintenance activities.

5. Support with Advanced Data Technologies:

• Provide support for tasks related to Impala, ensuring optimal performance and integration with other platforms;

• Assist with the configuration and maintenance of Debezium for real-time data streaming and change data capture;

• Support Kafka implementations, assisting with setup, monitoring, and troubleshooting;

• Collaborate with teams to optimize Spark processes for data processing and analytics;

• Assist with the administration and optimization of Greenplum database environments.

6. Other Support Activities:

• Collaborate with data teams to optimize data workflows and processes.
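As a rough illustration of the backup and monitoring duties listed above, the sketch below checks whether the newest file in a backup directory is fresh enough and could be wired into an alert; the directory path and the 24-hour threshold are hypothetical assumptions, not part of this posting.

```python
# Illustrative sketch only: verify that the newest file in a backup
# directory is recent enough. The path and the 24-hour threshold are
# hypothetical, not taken from the vacancy.
import sys
import time
from pathlib import Path
from typing import Optional

BACKUP_DIR = Path("/backups/greenplum")   # hypothetical location
MAX_AGE_HOURS = 24                        # hypothetical freshness threshold

def newest_backup_age_hours(directory: Path) -> Optional[float]:
    """Return the age in hours of the most recent file, or None if empty."""
    files = [p for p in directory.glob("*") if p.is_file()]
    if not files:
        return None
    newest = max(files, key=lambda p: p.stat().st_mtime)
    return (time.time() - newest.stat().st_mtime) / 3600

if __name__ == "__main__":
    age = newest_backup_age_hours(BACKUP_DIR)
    if age is None or age > MAX_AGE_HOURS:
        print(f"ALERT: no fresh backup in {BACKUP_DIR}")
        sys.exit(1)
    print(f"OK: newest backup is {age:.1f} hours old")
```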

Qualifications:

• Bachelor's degree in Computer Science, Information Technology, or a related field;

• 1-3 years of experience in data operations or a similar role;

• Proficiency in working with data platforms such as Dataiku, Tableau, Oracle BI, Impala, and Greenplum;

• Strong knowledge of data backup and recovery procedures;

• Experience in administering data tools and technologies;

• Excellent problem-solving skills and the ability to work under pressure;

• Strong communication skills, with the ability to convey complex information clearly and effectively;

• A team player with a proactive approach to tasks.

Preferred Skills:

• Certifications in Dataiku, Tableau, or Oracle BI are a plus;

• Experience with scripting languages such as Python or shell;

• Familiarity with cloud platforms like AWS;

• Hands-on experience with Impala, Debezium, Kafka, Spark, and Greenplum.

About the role:

A Data Scientist is a professional who uses their skills in mathematics, statistics, computer science, and domain knowledge to extract knowledge and insights from data. Data Scientists use this knowledge to solve real-world problems and make better decisions.

Responsibilities:

• Collect, clean, and prepare data for analysis;

• Develop and apply statistical and machine learning models to data;

• Analyze data to identify patterns and trends;

• Communicate findings and recommendations to stakeholders;

• Work with other data professionals to build and maintain data pipelines and infrastructure.
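As an illustration of the modelling workflow described above, here is a minimal, self-contained scikit-learn sketch; the synthetic dataset stands in for real, prepared data and only shows the general shape of the task.

```python
# Illustrative sketch only: a minimal train/evaluate loop with scikit-learn.
# A synthetic dataset stands in for real, prepared data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Generate a small synthetic classification problem.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a simple baseline model and report a standard metric.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")
```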

Qualifications:

• Master's degree in a related field, such as computer science, statistics, or mathematics;

• 3+ years of experience in data science or a related field;

• Strong programming skills in Python, R, or another programming language;

• Strong understanding of statistics and machine learning;

• Experience with data mining, data visualization, and data storytelling;

• Excellent communication and interpersonal skills;

• Ability to work independently and as part of a team.

Desired Skills:

• Experience with cloud computing platforms, such as AWS or Azure;

• Experience with big data processing frameworks, such as Hadoop or Spark;

• Experience with natural language processing (NLP) or computer vision;

• Experience with productionizing data science models;

• Experience with open source data science tools and libraries.

About the role:

Kapital Bank is seeking a highly motivated and experienced Business Data Analyst to join the Analytics Center of Excellence within the Data Management Office. The Business Data Analyst will work closely with the Product Team to help identify new business opportunities, perform market research, conduct feasibility studies, and enable data-driven decision-making. The position requires a blend of data and business skills, including the ability to analyze and interpret data, develop insights and recommendations, and effectively communicate findings to stakeholders.

Responsibilities:

• Work collaboratively with the Product Team to identify new business opportunities and growth strategies;

• Conduct market research and analysis to understand market trends, customer needs, and competitor activity;

• Analyze data to identify opportunities for improving product performance and customer satisfaction;

• Develop financial models and feasibility studies to evaluate new product ideas and business initiatives;

• Prepare reports and presentations that summarize findings and recommendations for management and other stakeholders;

• Monitor industry trends and regulatory changes that may impact the retail banking business;

• Participate in cross-functional teams and collaborate with other departments, including Marketing, Sales, and Operations, to support business objectives.

Qualifications:

• Bachelor's degree in Business, Finance, Economics, or a related field;

• Minimum of 3 years of experience in retail banking or a related industry;

• Strong analytical skills and experience in data analysis, modeling, and forecasting;

• Experience in calculating income and expenses of products and campaigns, and developing financial projections;

• Ability to prepare business plans based on analytical research, market trends, and customer needs;

• Excellent written and verbal communication skills, with the ability to effectively communicate complex financial information to stakeholders at all levels;

• Ability to work independently and as part of a team, with strong interpersonal and collaboration skills;

• Proficient in Microsoft Excel, PowerPoint, and other relevant software applications.

Technical skills:

• Knowledge of statistical analysis, SQL, and visualization tools (Tableau, PowerBI, or similar) is a must;

• Coding skills such as Python, R, SAS are advantageous.
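To illustrate the kind of income-and-expense analysis implied by the responsibilities and technical skills above, the sketch below summarises hypothetical campaign figures with pandas; the campaign names and amounts are invented for the example.

```python
# Illustrative sketch only: summarising income and expenses per campaign.
# The campaign names and amounts are invented for the example.
import pandas as pd

transactions = pd.DataFrame(
    {
        "campaign": ["cashback", "cashback", "deposit_promo", "deposit_promo"],
        "income": [12_000.0, 8_500.0, 15_000.0, 9_200.0],
        "expense": [4_000.0, 3_100.0, 6_500.0, 2_800.0],
    }
)

# Aggregate per campaign and derive the net result.
summary = (
    transactions.groupby("campaign", as_index=False)[["income", "expense"]]
    .sum()
    .assign(net=lambda d: d["income"] - d["expense"])
)
print(summary)
```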

About the role:

We are looking for a talented and experienced Data Engineer to join our team. The ideal candidate will have a strong understanding of data engineering principles and practices, as well as experience with cloud computing platforms and big data processing frameworks. The Data Engineer will be responsible for designing, building, and maintaining our data infrastructure, which will enable data scientists and analysts to extract knowledge and insights from data.

Professional skills and qualifications:

• Degree in computer science, information science, engineering, mathematics, or a related technical discipline;

• Experience with SQL and NoSQL technologies (Preferably Postgres and Oracle);

• Hands-on experience with Apache NiFi, including ETL process design, implementation, and use of ETL tools;

• Hands-on experience with Airflow or similar;

• Hands-on experience with Apache Spark;

• Hands-on experience with Greenplum administration;

• Experience in programming in Python, SQL, PySpark;

• Experience with data integration (ETL/ELT) concepts;

• Deep understanding of MPP architecture concepts;

• Experience with dbt modelling;

• Experience with SQL optimization;

• Familiarity with lakehouse concepts;

• Familiarity with version control tools.

The following experience will be considered as an advantage:

• Experience with AWS and/or GCP cloud services;

• Experience with data visualization tools (e.g. Tableau, PowerBI, etc.);

• Understanding of DataOps and DevOps principles;

• Ability to create and maintain data pipelines;

• Knowledge of data modeling and database design;

• Experience with data engineering best practices, such as data security, data access control and data governance.

Candidate should be responsible for:

• Designing a data lake to process and store an array of (un)structured data loaded from raw sources;

• Development of the data model - properly store the data and access it as needed for business purposes;

• Development of the integration process - development of integration with various systems so that they can have a single view of key indicators when making decisions;

• Data preparation and ETL - development of a pipeline for extracting, transforming and loading data;

• Developing and monitoring data pipelines to ensure data quality and integrity;

• Automation and optimization of the data transformation process;

• Working closely with other teams in the organization to ensure data is available and accessible for analytics and decision-making.
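As a rough illustration of the ETL responsibilities above, the sketch below shows a minimal PySpark job that extracts a CSV, applies a simple cleansing transformation, and loads the result as Parquet; the file paths and the "amount" column are hypothetical and not part of this posting.

```python
# Illustrative sketch only: a minimal extract-transform-load job in PySpark.
# The input/output paths and the "amount" column are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

# Extract: read raw data from a hypothetical landing area.
raw = spark.read.csv("/data/raw/transactions.csv", header=True, inferSchema=True)

# Transform: basic cleansing and a derived load-date column.
clean = (
    raw.dropDuplicates()
       .filter(F.col("amount").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet to a hypothetical curated zone.
clean.write.mode("overwrite").partitionBy("load_date").parquet("/data/curated/transactions")

spark.stop()
```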

About the role:

We are looking for a talented and experienced Data Platform Engineer to join our team. The ideal candidate will have a strong understanding of machine learning principles and practices, as well as experience with DevOps and CI/CD practices. The Data Platform Engineer will be responsible for building and maintaining the infrastructure and pipelines that support our machine learning models in production.

Professional skills and qualifications:

• Administration of Linux OS servers;

• Understanding of Docker, docker-machine, docker-compose;

• Understanding of Kubernetes / OpenShift technology;

• Monitoring with Grafana, Prometheus, and Zabbix;

• Understanding of git and branching strategies;

• At least one scripting language, such as Python;

• Deep understanding of MPP architecture concepts.

The following experience will be considered as an advantage:

• Good understanding of MLOps concepts;

• Experience with MLflow or similar;

• Good understanding of data processing principles;

• Good understanding of software development lifecycle;

• Nice to have experience with Greenplum or similar.
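As a small illustration of the MLOps items mentioned above, here is a minimal MLflow tracking sketch; the experiment name, parameters, and metric value are hypothetical, and a real setup would point at the team's own tracking server.

```python
# Illustrative sketch only: logging a run to MLflow.
# The experiment name, parameters, and metric value are hypothetical.
import mlflow

mlflow.set_experiment("example-risk-model")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    # Parameters and metrics would come from a real training job.
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("validation_auc", 0.87)
```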