
호남맛집 봉천동

Jeyuk bokkeum (spicy stir-fried pork)

Generous portions.

One order seems enough for two people.


제주상회 샤로수길

A thrilling spot for meat lovers and broth lovers alike.

Generous portions, and plenty of meaty flavor.


굽네치킨 봉천동

Really delicious...


휴김밥 서울대학식

Jjamppong ramyeon (spicy seafood noodle soup)

The fight for seats is fierce.


도야족발보쌈 관악본점

Bossam arriving at the end of a night of overtime.


휴김밥 서울대학식

Ramyeon won after a fierce battle for a seat.


메가커피 고잔동

A waffle I suddenly had a craving for.

So-so.


호남맛집 봉천역

Jeyuk deopbap (spicy stir-fried pork over rice)

Good.



자하연식당 서울대학식

Grilled chicken leg set meal

Tasty.


https://arxiv.org/abs/2201.00162

 

MLOps -- Definitions, Tools and Challenges


Search term: MLops software development life-cycle

Abstract : 

- To define the operation and the components of such systems by highlighting the current problems and trends

- The connection between MLOps and AutoML (Automated Machine Learning) is identified and how this combination could work is proposed.

- Keywords: MLOps; AutoML; machine learning; re-training; monitoring; explainability; robustness; sustainability; fairness

 

Ⅰ. Introduction

- A way for the teams involved to work together and combine their knowledge in order to deploy production-ready models. This task has many difficulties that are not easy to overcome, which is why only a small percentage of ML projects manage to reach production.

Ⅱ. Related Work

- In this section we will mention some of the most important and influential work in every task of the MLOps cycle.


Ⅲ. MLOps

- MLOps (Machine Learning Operations) stands for the collection of techniques and tools for the deployment of ML models in production.

- DevOps [7] stands for a set of practices whose main purpose is to minimize the time needed for a software release, reducing the gap between software development and operations.

- The two main principles of DevOps are Continuous Integration (CI) and Continuous Delivery (CD).

- Continuous integration is the practice by which software development organizations try to integrate code written by developer teams at frequent intervals. They constantly test their code and make small improvements each time based on the errors and weaknesses that result from the tests. This shortens the software development cycle.

- Continuous delivery is the practice according to which there is constantly a new version of the software under development to be installed for testing, evaluation and then production. With this practice, the software releases resulting from continuous integration, with the improvements and the new features, reach the end users much faster.

- The need to apply the same principles that govern DevOps to machine learning models became imperative. This is how these practices, called MLOps (Machine Learning Operations), came about. MLOps attempts to automate Machine Learning processes using DevOps practices and approaches.

- Although it seems simple, in reality it is not: a Machine Learning model is not independent but is part of a wider software system, and it consists not only of code but also of data.

- As the data is constantly changing, the model is constantly called upon to retrain on the new data that emerges. For this reason, MLOps introduces a new practice in addition to CI and CD, that of Continuous Training (CT), which aims to automatically retrain the model where needed.
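To make the CT idea concrete, here is a minimal, self-contained sketch (not from the paper) of a retraining trigger: the model is re-fit whenever its score on freshly collected data falls below a threshold. The dataset, model and threshold are arbitrary illustrative choices.

```python
# Sketch of a Continuous Training (CT) trigger: retrain automatically when
# performance on newly collected data drops below a threshold.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

ACCURACY_THRESHOLD = 0.85

# Initial training data and model (stand-ins for the production model).
X_old, y_old = make_classification(n_samples=1000, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_old, y_old)

# Newly collected production data; simulated here with a different seed.
X_new, y_new = make_classification(n_samples=500, random_state=1)

# CT trigger: if performance on the new data has decayed, retrain on it.
if model.score(X_new, y_new) < ACCURACY_THRESHOLD:
    model = LogisticRegression(max_iter=1000).fit(X_new, y_new)
```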

 

A. MLOps pipeline

- The best-known proposal is that of ThoughtWorks, which automates the life cycle of end-to-end Machine Learning applications.

- It is "a software engineering approach in which an interoperable team produces machine learning applications based on code, data and models in small, secure new versions that can be replicated and delivered reliably at any time, in short custom cycles".

- This approach includes three basic procedures: collecting, selecting and preparing the data to be used in model training; finding and selecting the most efficient model after testing and experimenting with different models; and developing and shipping the selected model to production. After collecting, evaluating and selecting the data that will be used for training, we automate the process of creating models and training them. This allows us to produce more than one model, which we can test and experiment with in order to arrive at a more efficient and effective model, while recording the results of our tests.

- Then we have to resolve various issues related to the productionization of the model, as well as submit it to various tests in order to confirm its reliability before deploying it to production. Finally, we can monitor the model and collect the resulting new data, which will be used to retrain the model, thus ensuring its continuous improvement.
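As a rough illustration of these three procedures, the sketch below wires data preparation, model selection and deployment together with scikit-learn; the paper does not prescribe any particular library, so every function and file name here is an assumption made for the example.

```python
# Compact sketch of the three basic pipeline procedures: data preparation,
# model selection, and deployment (persisting the chosen model artifact).
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def prepare_data():
    # Collect, select and prepare the data used for training.
    X, y = load_iris(return_X_y=True)
    return train_test_split(X, y, test_size=0.2, random_state=0)

def select_model(X_train, X_test, y_train, y_test):
    # Train several candidate models, record each result, keep the best one.
    candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
    results, best = {}, None
    for model in candidates:
        model.fit(X_train, y_train)
        score = model.score(X_test, y_test)
        results[type(model).__name__] = score
        if best is None or score > best[1]:
            best = (model, score)
    return best[0], results

def deploy(model, path="model.joblib"):
    # "Ship" the selected model to production by persisting the artifact.
    joblib.dump(model, path)

X_train, X_test, y_train, y_test = prepare_data()
model, results = select_model(X_train, X_test, y_train, y_test)
deploy(model)
```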

 

B. Maturity Levels

- Depending on its level of automation, an MLOps system can be classified at a corresponding level. The community has named these maturity levels. Although there is no universal maturity model, the two main ones were created by Google and Microsoft.

- Google's model consists of three levels, and its structure is presented in Figure 3.

MLOps level 0: Manual process

MLOps level 1: ML pipeline automation

MLOps level 2: CI/CD pipeline automation

 

- Microsoft's model consists of five levels, and its structure is presented in Figure 4.

Level 1: No MLOps

Level 2: DevOps but no MLOps

Level 3: Automated Training

Level 4: Automated Model Deployment

Level 5: Full MLOps Automated Operations


Ⅳ. Tools and Platforms

- In recent years many different tools have emerged in order to help automate the sequence of machine learning processes [20]. This section provides an overview of the different tools and the requirements that these tools meet. Note that different tools automate different phases of the machine learning workflow.

- The majority of tools come from the open source community because half of all IT organizations use open source tools for AI and ML and the percentage is expected to be around two-thirds by 2023. At GitHub alone, there are 65 million developers and 3 million organizations contributing to 200 million projects. Therefore, it is not surprising that there are advanced sets of open source tools in the landscape of machine learning and artificial intelligence. Open source tools focus on specific tasks within MLOps instead of providing end-to-end machine learning life-cycle management.

- These tools and platforms typically require a development environment in Python or R.

- The choice of tools for MLOps is based on the context of the respective ML solution and the operations setup.

A. Data Preprocessing Tools

- Data preprocessing tools are divided into two main categories: data labeling tools and data versioning tools. Data labeling tools (also called annotation, tagging or sorting tools) handle the labeling of large volumes of data such as text, images or sound. Data labeling tools can in turn be divided into different categories depending on the task they perform. Some are designed to annotate specific file types such as videos or images [21]; few of these tools can handle all file types. There are also different types of labels that differ between tools: bounding boxes, polygonal annotations, and semantic segmentation are the most common features in the labeling market. Your choice of data labeling tool will be an essential factor in the success of the machine learning model, so you need to specify the type of data labeling your organization needs [22]. Labeling accuracy is an important aspect of data labeling [23]: high quality data leads to better model performance.

- Data versioning tools (also called data version control tools) manage different versions of data sets and store them in an accessible and well-organized way [24]. This allows data science teams to gain knowledge, such as identifying how changes affect model performance and understanding how data sets evolve. The most important data preprocessing tools are listed in Table I.
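The following toy sketch illustrates the core idea behind data versioning: identify each dataset snapshot by a content hash and record it next to the experiment results, so that teams can later trace how data changes affected model performance. Real versioning tools (DVC and similar) do far more than this; the function names here are hypothetical.

```python
# Toy illustration of data versioning: a content hash identifies each dataset
# snapshot, and experiments record which snapshot they were run against.
import hashlib
import json

def dataset_version(path):
    # Hash the raw bytes of a dataset file to get a reproducible version id.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:12]

def record_experiment(data_path, metrics, registry="experiments.jsonl"):
    # Append the dataset version and the results so changes can be traced later.
    entry = {"data_version": dataset_version(data_path), "metrics": metrics}
    with open(registry, "a") as f:
        f.write(json.dumps(entry) + "\n")
```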

 

B. Modeling Tools

- The tools with which we extract features from a raw data set in order to create optimal training data sets are called feature engineering tools. Tools like these can speed up the feature extraction process [25] when applied to common applications and generic problems.

- To monitor the versions of the data of each experiment and its results, as well as to compare different experiments, we use experiment tracking tools, which store all the necessary information about the different experiments, because developing machine learning projects involves running multiple experiments with different models, model parameters, or training data. Hyperparameter tuning or optimization tools automate the process of searching for and selecting hyperparameters that give optimal performance for machine learning models. Hyperparameters are the parameters of machine learning models, such as the size of a neural network or the type of regularization, that model developers can adjust to achieve different results [26]. The most important modeling tools are listed in Table II.
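As a small, hedged example of hyperparameter tuning, the snippet below uses scikit-learn's GridSearchCV; the model, parameter grid and dataset are arbitrary, and the cv_results_ attribute stands in for a very small experiment tracker.

```python
# Hyperparameter tuning sketch: exhaustive search over a small grid with
# cross-validation; every tried configuration is recorded for comparison.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

search = GridSearchCV(SVC(), param_grid, cv=5)   # try every combination with 5-fold CV
search.fit(X, y)

# cv_results_ holds the score of every configuration, i.e. a tiny experiment log.
print(search.best_params_, search.best_score_)
```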

C. Operationalization Tools

- Then, to facilitate the integration of ML models into a production environment, we use machine learning model deployment tools [27]. Machine learning model monitoring is a key aspect of every successful ML project, because ML model performance tends to decay after model deployment due to changes in the input data flow over time [28]. Model monitoring tools detect data drifts and anomalies over time and allow setting up alerts in case of performance issues. Finally, we should mention that there are now tools available that cover the life cycle of an end-to-end machine learning application. The most important operationalization tools are listed in Table III.
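For a sense of what a deployment tool ultimately produces, here is a bare-bones sketch of serving a persisted model behind an HTTP endpoint with Flask. The route name, payload format and model path are assumptions for illustration, not a convention from the paper or from any specific MLOps tool.

```python
# Minimal model-serving sketch: load a persisted model and expose /predict.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")   # artifact produced by the training pipeline

@app.route("/predict", methods=["POST"])
def predict():
    # Expected payload shape (assumed): {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(port=8000)
```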

 

 

D. The example of colossal companies

- It’s common for big companies to develop their own MLOps platforms in order to deploy fast, reliable and reproducible pipelines. Two main problems led these companies to create their own platforms. The first is the time needed to build and deliver a model to production [29]; the main goal is to reduce the time required from a few months to a few weeks.

- Also, the stability of ML models in their predictions and the reproducibility of these models under different conditions are always two of the most important goals. Some illustrative examples of such companies are Google with TFX (2019) [30], Uber with Michelangelo (2015) [31], Airbnb with Bighead (2017) [32] and Netflix with Metaflow (2020) [33].

E. How to choose the right tools

- The MLOps life-cycle consists of different tasks. Every task has unique characteristics, and the corresponding tools are developed to match them. Hence, an efficient MLOps system depends on choosing the right tools, both for each task and for the connectivity between them. Every challenge also has its own characteristics, and the right way to go depends on them [34]. There is no general recipe for choosing specific tools [3], but we can provide some general guidelines that can help eliminate some tools and simplify the problem. There are tools that offer a variety of functionalities and tools that are more specialized.

- Generally, the fewer tools we use the better, because it is easier, for example, to achieve compatibility between 3 tools than between 5. But some tasks require greater flexibility, so the biggest challenge is to find the balance between flexibility and compatibility. For this reason it is important to make a list of the available tools that are capable of solving the individual problem in every task. Then we can check the compatibility between them in order to find the best way to go. This requires excellent knowledge of as many tools as possible from every team working on an MLOps system, so the list gets smaller when we add pre-existing knowledge of these tools as a precondition. This is not always a solution, so we can also favor tools that are easy to understand and use.


Ⅴ. AutoML

- In recent years more and more companies have tried to integrate machine learning models into the production process, and for this reason another software solution was created. AutoML is the process of automating the different tasks that creating an ML model requires [35]. Specifically, an AutoML pipeline contains data preparation, model creation, hyperparameter tuning, evaluation and validation. With these techniques, a set of models is trained on the same data set, hyperparameter fine-tuning is applied, and finally the models are evaluated and the best model is exported. Therefore the process of creating and selecting the appropriate model, as well as preparing the data, turns into a much simpler and more accessible process [36]. This is the reason why every year more and more companies turn their attention to AutoML. The combination of AutoML and MLOps simplifies the deployment of ML models in production and makes it much more feasible. In this section we give a brief introduction to the most modern AutoML tools and platforms, aiming at the combination of AutoML and MLOps.

A. Tools and platforms

- Every year more and more tools and platforms are emerging [36]. AutoML platforms are services that are mainly accessible in the cloud; therefore, for this task they are not always preferred, although when a cloud-based MLOps platform has been selected, better compatibility is possible. There are also libraries and APIs written in Python and C++, which are much more preferable when an end-to-end cloud-based MLOps platform has not been chosen. The ones that stand out are Auto-Sklearn [37], Auto-Keras [38], TPOT [39], AutoPytorch [40] and BigML [41]. The main platforms are Google Cloud AutoML [42], Akkio [43], H2O [44], Microsoft Azure AutoML [45] and Amazon SageMaker Autopilot [46]. The most important tools are listed in Table IV.
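As a brief illustration of how little code an AutoML library asks for, here is a sketch using TPOT (one of the libraries listed above); the dataset and search budget are arbitrary, and exact constructor arguments may vary between TPOT versions.

```python
# AutoML sketch with TPOT: search over preprocessing + model + hyperparameter
# combinations, then export the winning pipeline as plain scikit-learn code.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = TPOTClassifier(generations=5, population_size=20, random_state=0)
automl.fit(X_train, y_train)

print(automl.score(X_test, y_test))
automl.export("best_pipeline.py")   # the exported file is ordinary sklearn code
```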

B. Combining MLOps and AutoML

- It is obvious that the combination of the two techniques can be extremely effective [3], but there are still some pros and cons. AutoML requires vast computational power in order to perform. Hardware advances deliver more computational power every year and get closer to overcoming this kind of challenge, but AutoML will always be more computationally expensive compared to classic machine learning techniques, mostly because it performs the same tasks in much less time.

- Also, we are given much less flexibility. The AutoML tool works as a pipeline, so we have no control over the choices it will make, which means AutoML does not qualify for very specialized tasks. On the other hand, with AutoML, retraining is a much easier and more straightforward task: as long as the new data is labeled, or the models use unsupervised techniques, we only have to feed the new data to the AutoML tool and deploy the new model. In conclusion, AutoML is a much quicker and more efficient process than the classic ML pipeline [47], which can be extremely beneficial in achieving efficient, high-maturity-level MLOps systems.


Ⅵ. MLOps Challenges

- In the past years, much research has focused on the maturity levels of MLOps and the transition to fully automated pipelines [13]. Several challenges have been detected in this area, and it is not always easy to overcome them [48]. A low maturity level system relies on classical machine learning techniques and requires an extremely good connection between the individual working teams, such as data scientists, ML engineers and front-end engineers. Many technical problems arise from this separation and the lack of compatibility from one step to the next. The first challenge lies in the creation of robust, efficient pipelines with strong compatibility. Constant evolution is another critical point of a high maturity level MLOps platform, so constant retraining shifts to the top of the current challenges.

A. Efficient Pipelines

- An MLOps system includes various pipelines [49]. Commonly, a data manipulation pipeline, a model creation pipeline and a deployment pipeline are mandatory. Each of these pipelines must be compatible with the others, in a way that optimizes flow and minimizes errors. From this perspective it is critical to choose the right tools for creating and connecting these pipelines. The shape of the targets determines the best combination of tools and techniques: there is no ideal combination for every problem; rather, the problem determines the combination to be chosen. It is also always critical to use the same data preprocessing libraries in every pipeline; in this way, we prevent many compatibility errors.
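One common way to keep preprocessing identical across the training and deployment pipelines, in the spirit of the paragraph above, is to bundle the preprocessing steps and the model into a single scikit-learn Pipeline and persist that one object. This is a sketch of that idea, not a prescription from the paper.

```python
# Bundle preprocessing and model in one Pipeline object so that scaling at
# serving time cannot drift apart from scaling at training time.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),                 # preprocessing step
    ("clf", LogisticRegression(max_iter=1000)),  # model step
])
pipeline.fit(X, y)

# The deployment pipeline loads the exact same object.
joblib.dump(pipeline, "pipeline.joblib")
serving_pipeline = joblib.load("pipeline.joblib")
print(serving_pipeline.predict(X[:3]))
```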

B. Re-Training

- After monitoring and tracking your model performance, the next step is retraining your machine learning model [50]. The objective is to ensure that the quality of your model in production is up to date. However, even if the pipelines are perfect, there are many problems that complicate or even make retraining impossible. From our point of view, the most important of them is new data manipulation.

 

1) New Data Manipulation: When a model is deployed in production, we use new, raw data to make predictions and use them to extract the final results. However, when we are using supervised learning, we do not have the corresponding labels at our disposal, so it is impossible to measure the accuracy and constantly evaluate the model. It is possible to perceive the robustness of the model only by evaluating the final results, which is not always an option. Even if we manage to evaluate the model and find low metrics on the new data, the same problem arises again: in order to retrain (fine-tune) the model, labels are a prerequisite. Manually labeling the new data is a solution, but it slows down the process and fails for constant retraining tasks. Another approach is to use the trained model to label the new data, or to use unsupervised learning instead of supervised learning, but this also depends on the type of problem and the targets of the task. Finally, there are types of data for which there is no need for labeling; the most common area that uses this kind of data is time series and forecasting.
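The pseudo-labeling approach mentioned above can be sketched as follows: the current model labels the new, unlabeled data, only confident predictions are kept, and the model is retrained on the enlarged set. The confidence threshold, model and simulated data are illustrative assumptions.

```python
# Pseudo-labeling sketch: use the current model to label new data, keep only
# confident predictions, and retrain on the union of old and new examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

CONFIDENCE = 0.9

X_labeled, y_labeled = make_classification(n_samples=500, random_state=0)
X_new, _ = make_classification(n_samples=200, random_state=1)  # labels unknown in production

model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

proba = model.predict_proba(X_new)
confident = proba.max(axis=1) >= CONFIDENCE       # keep only confident predictions
pseudo_labels = model.predict(X_new)[confident]

# Retrain on original data plus the confidently pseudo-labeled new data.
X_retrain = np.vstack([X_labeled, X_new[confident]])
y_retrain = np.concatenate([y_labeled, pseudo_labels])
model = LogisticRegression(max_iter=1000).fit(X_retrain, y_retrain)
```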

C. Monitoring

In most papers and articles, monitoring is positioned as one of the most important functions in MLOps [51]. This is because understanding the results helps in understanding the shortcomings of the entire system. This last section shows the importance of monitoring not only the accuracy of the model, but every aspect of the system.

 

1) Data monitoring: Monitoring the data can be extremely useful in many ways. Detection of outliers and drift is a way to prevent failures of the model and support correct training. Constantly monitoring the shape of the incoming data, and how far it has moved away from the training data, is another safeguard. There are lots of tools and techniques for data monitoring, and choosing the right ones also depends on the target.
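A very simple version of such a data drift check, shown here only as a sketch of what monitoring tools automate, compares each feature's distribution in production against the training data with a two-sample Kolmogorov-Smirnov test; the significance level and simulated shift are arbitrary.

```python
# Per-feature data drift check using the two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

ALPHA = 0.01

def detect_drift(X_train, X_production):
    drifted = []
    for j in range(X_train.shape[1]):
        stat, p_value = ks_2samp(X_train[:, j], X_production[:, j])
        if p_value < ALPHA:          # distributions differ significantly
            drifted.append(j)
    return drifted

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 3))
X_prod = rng.normal(0.5, 1.0, size=(1000, 3))    # simulated shift in every feature
print(detect_drift(X_train, X_prod))             # indices of drifted features
```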

 

2) Model Monitoring: Monitoring the accuracy of a model is a way to evaluate its performance on a batch of data at a precise moment. For a high maturity level system, we need to monitor more aspects of our model and of the whole system. In previous years, much research [4][5] has focused on sustainability, robustness [52], fairness, and explainability [53]. The reason is that we need to know more about the structure of the model, its performance, and the reasons why it works or does not work.


Ⅶ. Conclusion

In conclusion, MLOps is the most efficient way to incorporate ML models into production. Every year more enterprises use these techniques and more research is done in the area. But MLOps may also have a different use: in addition to putting ML models into production, a fully mature MLOps system with continuous training can lead us to more efficient and realistic ML models. Further, choosing the right tools for each job is a constant challenge. Although there are many papers and articles on the different tools, it is not easy to follow the guidelines and incorporate them in the most efficient way; sometimes we have to choose between flexibility and robustness, with the respective pros and cons. Finally, monitoring is a stage that must be one of the main points of interest. Monitoring the state of the whole system in terms of sustainability, robustness, fairness, and explainability is, from our point of view, the key to mature, automated, robust and efficient MLOps systems. For this reason, it is essential to develop models and techniques that enable this kind of monitoring, such as explainable machine learning models. AutoML may be the game changer in the chase for maturity and efficiency, so a more comprehensive and practical survey of the usage of AutoML in MLOps is necessary.


 

My Opinions : 

Think of AI, an entity toward which society at large holds great expectations mixed with fear, notions of superpowers, vagueness and fantasy, simply as a highly efficient tool. A tool is about productivity, and if we then look at it from the perspective of planning and strategy, we end up weighing adoption budget, expected benefits, cost-effectiveness, maintenance cost, cost savings, higher performance, and so on.

Michael Porter's value chain theory

Value chain theory is an analytical framework for identifying which costs are incurred along the way before a product's price and profit margin are ultimately determined.


Based on this:
- In my experience, producing a single AI model takes several months, and because it also requires mathematical competence, it tends to depend heavily on the individual abilities of researchers and engineers who command high labor costs. In other words, the initial cost is high.
- AI itself suffers from drift: performance degrades as concepts, data, predictions, labels, and features each change. For that reason, the maintenance cost of re-training is also high.

https://data-newbie.tistory.com/792

 


- Technically, a series of processes spanning quite different areas has to be carried out, from preprocessing the raw data through dataset selection, training, evaluation, tuning, and deployment.

- AI is extremely expensive because of its technical difficulty, the shortage of skilled people, and the early stage of the technology's development.

 

- The task for AI is to slash these costs. Think of Tesla, which opened a new market for electric vehicles by achieving economies of scale through the straightforward optimization of the Gigafactory; SpaceX, which opened the era of private spaceflight by cutting costs with reusable rockets; or Microsoft, which created the personal computer era by spreading its OS. AI must likewise achieve the 'popularization' of something expensive yet necessary for human society, and through that its 'democratization'. Since AI is ultimately a tool, the company that achieves that cost reduction takes the market share, and the company that achieves it changes the world.

 

- Price matters, but it also has to be sufficiently friendly, easy, and general. Unlike earlier processor companies, NVIDIA won not simply by selling hardware, but, in my view, because of CUDA, which gave that hardware enough support to be used as a general-purpose GPU. Likewise, Apple changed the world with the iPhone because it developed a product concept and an OS that could absorb all the functions that used to live in separate devices: the cell phone, GPS, and MP3 player. The point is that innovation is needed not only in price but also in accessibility and ease of use.

 

- From this standpoint, I think that combining AutoML as the model pipeline with MLOps as systemic sustainability is an excellent way to remove the barriers to entry for AI and raise productivity. Recently, MLOps products that enable interactive, GUI-driven AI development have reportedly started to appear, from companies such as Upstage and SNUAILAB. If AI development becomes possible through a GUI, the required workforce drops from experts to mid-level engineers, and continuous operations become feasible, so costs can fall dramatically.

 

- Unless a tool becomes fully autonomous, it evolves toward being optimized for humans. Most people are nothing special, so they prefer things that are easy, simple, and cheap. Only by achieving overwhelming productivity through a concept on a different level will AI become competitive.
