Original link: https://hijiangtao.github.io/2022/07/07/Practice-of-Project-Development-and-Auto-Workflow/
As the front end has evolved, more and more tool libraries and methods have entered the daily R&D process, greatly improving the efficiency of business development. With various automated workflows in place, developers no longer need to attend to every detail themselves. Some time ago my project was delivered in stages, and many experiments were made along the way. In the long run this kind of work may well converge into the infrastructure team or standard automated pipelines, but that does not stop me from putting some thoughts and ideas about project development into practice.
If you are an experienced developer, you can skip straight to the end of the article; the “Summary” section gives a concise recap of the full text.
Next, let me share some automation and efficiency practices I tried during project development. The examples in this article are mainly based on a React project generated by create-react-app, supplemented by solutions from a few other frameworks for illustration.
The first step of a new project: scaffolding
If your project is built on Angular, there are not many choices and you can go straight to the Angular CLI; if it is React or Vue, there is a lot of scaffolding to choose from, and many domestic developers maintain their own open source solutions. Most of these solutions bundle particular component libraries and open source libraries. If you want a less opinionated setup, the official scaffold create-react-app (CRA) is the natural first choice.
We can quickly create a TypeScript project with CRA by executing the following command via npx:
```bash
npx create-react-app project --template typescript
```
However, as CRA has developed, the official scaffold has encapsulated more and more of the details that used to be exposed in the template into the react-scripts package. For example, if you want to modify the project's webpack build process, you cannot do so directly with the official template.
The official template provides the npm run eject command. Executing it “ejects” a series of hidden configuration files and some dependencies into the project, after which you have full control over additions and deletions, but the operation is irreversible. Once the configuration files are ejected, you can no longer follow the official steps to upgrade the react-scripts version used by the project.
So, if you want a solution that can override the configuration while staying in sync with the official releases, you can try craco: https://github.com/dilanx/craco
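For reference, a typical integration (following the craco documentation at the time of writing) installs craco as a dev dependency and swaps react-scripts for craco in the package.json scripts:

```bash
npm install @craco/craco --save-dev
```

```json
{
  "scripts": {
    "start": "craco start",
    "build": "craco build",
    "test": "craco test"
  }
}
```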
Style isolation scheme: CSS Modules
This part follows on from the previous section. CSS Modules may be a well-known option that is enabled by default; its origin lies in the need for style isolation between components. If your project uses the official CRA scaffold, CSS Modules support is already built in: name your style files in the expected format (for example *.module.css), and CRA will automatically parse and process them.
But as mentioned above, you may want to tweak the style loader; for example, the default parsing does not expose style class names as camelCase variables. To achieve this, once craco is integrated you can write the following in craco.config.js to extend the css-loader options:
```js
module.exports = {
  style: {
    css: {
      mode: "extends",
      loaderOptions: {
        modules: {
          auto: true,
          exportLocalsConvention: 'camelCaseOnly',
        },
      },
    },
  },
};
```
Of course, when we reference CSS Modules variables in TypeScript files, TypeScript knows nothing about files other than .ts and .tsx. To keep the IDE from reporting errors during syntax checking, we also need to declare ambient modules for the relevant file suffixes. For a CRA project, you can simply add the following declarations to the react-app-env.d.ts file:
/// <reference types="node" /> /// <reference types="react" /> /// <reference types="react-dom" /> /// <reference types="react-scripts" /> declare module ' *.module.css ' { const classes : { readonly [ key : string ]: string }; export default classes ; } declare module ' *.module.scss ' { const classes : { readonly [ key : string ]: string }; export default classes ; } declare module ' *.module.sass ' { const classes : { readonly [ key : string ]: string }; export default classes ; }
In addition, craco lets you modify other things as well, such as configuring Babel not to transpile ES modules, changing the build output path, customizing the hot-update scheme, and so on.
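As a rough sketch of the webpack side (the output directory below is an assumption, not something from the original project), overriding the build path through craco could look like this:

```js
// craco.config.js — a minimal sketch; adjust to your project's needs
const path = require('path');

module.exports = {
  webpack: {
    configure: (webpackConfig) => {
      // emit bundles into ./dist instead of CRA's default ./build (illustrative)
      webpackConfig.output.path = path.resolve(__dirname, 'dist');
      return webpackConfig;
    },
  },
};
```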
ESLint code inspection: sharing two custom lint scenarios
In order to ensure a consistent code style, such as avoiding the use of var in the project, we can introduce ESLint rules to standardize the submitted code. Of course, most scaffolding today should integrate ESLint for you when creating a new project, either via .eslintrc.json or .eslintrc.js, or via the eslintConfig property directly in package.json.
Beyond enabling the default rule set, there are many details worth handling during development to make collaboration among multiple people more efficient. Here I will briefly share two examples.
The first: when we add or delete code, redundant import declarations may be left behind. Naturally we would like them removed before the code is committed.
In ESLint, plugins can expose additional rules. To achieve the goal just mentioned, you can introduce the unused-imports plugin to check ES module imports (for example, to raise errors for unused import declarations):
{ " plugins " : [ " unused-imports " ], " rules " : { " no-unused-vars " : " off " , " unused-imports/no-unused-imports " : " error " , " unused-imports/no-unused-vars " : [ " warn " , { " vars " : " all " , " varsIgnorePattern " : " ^_ " , " args " : " after-used " , " argsIgnorePattern " : " ^_ " } ] } }
With the IDE or an IDE plugin, we can also go a step further and clean up import references on every save.
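In VS Code, for example, this is typically done through workspace settings similar to the following (a sketch; other IDEs have equivalent options):

```jsonc
// .vscode/settings.json — apply ESLint fixes, including unused-imports, on save
{
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  }
}
```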
Another class of problem that tends to occur in multi-person collaboration is code conflicts. When several people modify a file at the same time, they may each introduce new modules. If imports are not managed in a unified style, “import conflicts” arise easily. To solve this, the simple-import-sort plugin can be introduced; it also supports sorting exports:
{ " plugins " : [ " simple-import-sort " ], " rules " : { " simple-import-sort/imports " : " error " , " simple-import-sort/exports " : " error " } }
In this way, code inspection and repair can be completed by running eslint --fix once before the code is committed. We will cover this in detail in the later chapter on git hooks.
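In package.json this is often exposed as a script, along the lines of the sketch below (the src path and the extension list are assumptions about the project layout):

```json
{
  "scripts": {
    "lint": "eslint src --ext .js,.jsx,.ts,.tsx --fix"
  }
}
```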
Other rules are framework-related. For example, starting from React 17, we can configure ESLint rules so that React no longer needs to be imported just for JSX. For details, see the official React guide on the new JSX transform. Below is a simple before-and-after comparison:
```jsx
// Before
import React from 'react';

function App() {
  return <h1>Hello World</h1>;
}

// Now
function App() {
  return <h1>Hello World</h1>;
}
```
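If the project uses eslint-plugin-react, the two rules that used to require this import are usually switched off so that lint agrees with the new JSX transform (a sketch based on the React/ESLint documentation):

```json
{
  "rules": {
    "react/jsx-uses-react": "off",
    "react/react-in-jsx-scope": "off"
  }
}
```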
Automated environment configuration: git hook definitions
husky is a tool that adds hooks to the git client and can be used to configure our local automation environment. We can install husky and define the git hooks we need along with the specific tasks to perform, so that when we carry out git operations the code can be checked and processed at specific moments. Common hooks include:
- commit-msg – commit message hook, triggered when git commit or git merge is executed
- pre-commit – pre-commit hook, triggered when git commit is executed
The following is a piece of husky installation and initialization code:
```bash
npm install husky -D
npx husky-init && npm install

# add a commit-msg hook
npx husky add .husky/commit-msg './node_modules/.bin/commitlint --from=HEAD~1'
```
For example, we can combine the commitlint tool to check the commit message in the commit-msg stage (as shown in the code above), or format the code about to be committed in the pre-commit stage.
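For instance, a pre-commit hook that runs lint-staged (introduced in the next section) can be added in the same way:

```bash
npx husky add .husky/pre-commit 'npx lint-staged'
```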
Custom command invocation: unifying code style and commit message conventions
It is not only development code: commit messages written in all sorts of styles are also very unfriendly to read and maintain. For example, when we need to browse historical commits to locate the code change introduced by a specific commit, this depends on commit messages following a convention. Using commitlint can help us achieve this.
commitlint needs to be used together with husky. Specifically, husky ensures that commands are configured for specific hooks, and commitlint ensures that when the command runs it performs the specific checks on the commit message. A simple commitlint installation and configuration looks like this:
```bash
# install
npm install --save-dev @commitlint/{config-conventional,cli}

# configure
echo "module.exports = {extends: ['@commitlint/config-conventional']}" > commitlint.config.js
```
After configuring commitlint with the default configuration, a commit only goes through when the commit message complies with the following format; otherwise the operation is interrupted (scope is optional):
```
<type>(<scope>): description
```
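For example (the scope and descriptions below are made up for illustration):

```bash
# accepted by the default rules
git commit -m "feat(login): support a remember-me option"

# rejected: no type prefix
git commit -m "update login page"
```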
If custom rules are required, we need to edit the commitlint.config.js file and add rules to it. For this part, please refer to the official documentation: https://commitlint.js.org/
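As a sketch, restricting the allowed types could look like this (the type list is only an example, not a recommendation):

```js
// commitlint.config.js — a minimal sketch with a custom type-enum rule
module.exports = {
  extends: ['@commitlint/config-conventional'],
  rules: {
    // 2 = error, 'always' = apply the rule, third element = allowed values
    'type-enum': [2, 'always', ['feat', 'fix', 'docs', 'test', 'build', 'ci', 'chore']],
  },
};
```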
In addition, after adding the commitlint configuration file, the IDE may mark the file in red during linting, since this file is not part of the project build (it is just a simple commitlint configuration). We can therefore ignore it in the ESLint configuration; likewise, other files in a similar position can be added alongside it:
{ " ignorePatterns " : [ " commitlint.config.js " ] }
Having covered the commit convention, the code format itself also needs to be standardized. For example, there are rules for variable naming and import ordering in TypeScript code, and rules for indentation and tag closure in HTML templates. These can also be enforced by tools combined with git hooks. Here we introduce the linter runner: lint-staged.
lint-staged is a tool that runs linters on git staged files; through it we can customize formatting operations for various files (mainly backed by eslint and prettier). Combined with the pre-commit git hook, we can execute lint-staged when the hook fires to format the corresponding files. How do we make the tool treat different file types differently? Through configuration, that is, by declaring a lint-staged field in package.json:
{ ..., " lint-staged " : { " *.{js,jsx,ts,tsx} " : [ " eslint -c ./.eslintrc.json --fix " ], " (*.json|.eslintrc|.prettierrc) " : [ " jsonlint --in-place " ], " *.{s,}css " : [ " prettier --write " ], " *.{html,md} " : [ " prettier --write " ] } }
The above configuration tells lint-staged to run eslint on JavaScript/TypeScript files, jsonlint on JSON files, and prettier on style files as well as HTML and Markdown files.
Local development proxy environment
From the perspective of local development, one of the capabilities most in need is forwarding API requests, whether to avoid possible CORS errors or to bypass certain request restrictions, by changing header information for specific requests or rewriting the actual domain name and path of a request. All of these boil down to establishing a local proxy environment. When a front-end project is developed locally, we can enhance the local dev server with a library called http-proxy-middleware.
From the documentation, we know that we can build a middleware for Node.js projects by calling the createProxyMiddleware API. For an express application, the proxy middleware can be configured as follows:
```ts
import * as express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();

app.use(
  '/api',
  createProxyMiddleware({
    target: 'http://www.example.org/api',
    changeOrigin: true,
  })
);

app.listen(3000);
```
For projects built with CRA, since CRA does some encapsulation, we no longer need to explicitly start a Node.js application during local development; the call to the proxy middleware above can instead be written in a designated file following CRA's convention (the file name and method signature need to follow the standard, which is described in the official CRA documentation and not repeated here).
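For reference, at the time of writing that convention is a src/setupProxy.js file exporting a function that receives the dev server's express app; a minimal sketch (the target address is illustrative):

```js
// src/setupProxy.js — a minimal sketch following the CRA convention
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  app.use(
    '/api',
    createProxyMiddleware({
      target: 'http://www.example.org', // illustrative backend address
      changeOrigin: true,
    })
  );
};
```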
Code reuse: component templates and code snippets
When we create a new project, scaffolding builds a suitable skeleton for us, and we then fill it with components, pages, services, and so on. For finer-grained code, we can also consider improving development efficiency through code reuse. The scenarios here mainly fall into two categories:
- Snippet-level: reusable code snippets
- Component-level: file-granularity code generation
For the first type of scenario, many IDE plugins already achieve this for us: typing a few letters in a certain type of file and hitting enter quickly generates a code snippet. For example, when using RxJS we often need to define a BehaviorSubject together with its getter and setter. If we type bgs, the following code appears, which saves time when writing initialization code:
```ts
/* TODO: data stream definition */
behaviorName$ = new BehaviorSubject<string>(initialValue);

get behaviorName() {
  return this.behaviorName$;
}

set behaviorName(value: string) {
  this.behaviorName$.next(value);
}
```
What we have to do is bind bgs to the code above (with suitable placeholders). The implementation can be a VS Code plugin, for example one I wrote earlier: https://marketplace.visualstudio.com/items?itemName=hijiangtao.tutor-code-snippets — it can also be a WebStorm plugin or live templates.
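Without writing a plugin, a similar effect can be approximated with the editor's built-in user snippets. Below is a rough sketch of a VS Code snippet entry (names and placeholders are illustrative; literal $ characters are escaped as \\$ in the JSON):

```jsonc
// typescript.json (VS Code user snippets) — an illustrative "bgs" snippet
{
  "BehaviorSubject with getter and setter": {
    "prefix": "bgs",
    "body": [
      "/* TODO: data stream definition */",
      "${1:behaviorName}\\$ = new BehaviorSubject<${2:string}>(${3:initialValue});",
      "get ${1:behaviorName}() {",
      "  return this.${1:behaviorName}\\$;",
      "}",
      "set ${1:behaviorName}(value: ${2:string}) {",
      "  this.${1:behaviorName}\\$.next(value);",
      "}"
    ],
    "description": "Define a BehaviorSubject with its getter and setter"
  }
}
```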
For the second type of scenario, what we often need is a set of template, style, and TypeScript logic files that follow our naming conventions under a specified path. For example, in an Angular project, creating a new component generates the HTML, TypeScript, and CSS files at once and updates the nearest *.module.ts file:
```
CREATE projects/src/app/demo/demo.component.css (0 bytes)
CREATE projects/src/app/demo/demo.component.html (19 bytes)
CREATE projects/src/app/demo/demo.component.spec.ts (612 bytes)
CREATE projects/src/app/demo/demo.component.ts (267 bytes)
UPDATE projects/src/app/app.module.ts (1723 bytes)
```
In Angular projects, we can easily achieve this through Angular schematics; for React or Vue projects, there are also many implementation solutions in the community, such as generate-react-cli.
If you want to build the tooling for the second type of scenario yourself, without relying on community or official solutions, you can also develop an npm package that exposes a corresponding bin script and run it through npx. For an introduction to npx, see an earlier blog post of mine, “Record the usage scenarios of npx”.
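As a very rough sketch of what such a package's bin script might contain (the package name, paths, and templates here are all made up for illustration):

```js
#!/usr/bin/env node
// bin/gen-component.js — exposed via a package.json entry such as
// "bin": { "gen-component": "./bin/gen-component.js" }, then run with
// `npx gen-component Demo`
const fs = require('fs');
const path = require('path');

// the component name comes from the first CLI argument
const name = process.argv[2] || 'Demo';
const dir = path.join(process.cwd(), 'src', 'components', name);

// create the folder plus a TSX file and a CSS Module following the naming convention
fs.mkdirSync(dir, { recursive: true });
fs.writeFileSync(
  path.join(dir, `${name}.tsx`),
  `import styles from './${name}.module.css';\n\nexport function ${name}() {\n  return <div className={styles.root}>${name}</div>;\n}\n`
);
fs.writeFileSync(path.join(dir, `${name}.module.css`), '.root {}\n');
console.log(`Created ${dir}`);
```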
Release: version release and CHANGELOG automation
Whenever we need to launch a new requirement, in order to better record the changes we generally need to publish a version and record some CHANGELOG entries at the same time. If this work can be fully automated, it will undoubtedly improve both our project conventions and our release efficiency. Here standard-version serves as the example: we introduce standard-version to standardize automatic tagging and CHANGELOG generation. Specifically, we expect standard-version to achieve the following goals:
- Automatically upgrade and tag versions of different levels (major, minor, patch) of the project according to the specified rules
- Compare historical commits to automatically generate readable, categorized CHANGELOG entries between versions
Through configuration, we can also rename how the type field from the commit messages mentioned above is displayed in the CHANGELOG headings. The following is an example configuration (placed, for instance, in a .versionrc.js file):
module . exports = { " types " : [ { " type " : " feat " , " section " : " Features " }, { " type " : " fix " , " section " : " Bug Fixes " }, { " type " : " test " , " section " : " Tests " }, { " type " : " doc " , " section " : " Document " }, { " type " : " build " , " section " : " Build System " }, { " type " : " ci " , " hidden " : true } ] }
The following is an example of an automatically generated CHANGELOG:
```markdown
# Changelog

All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.

## [1.11.0](https://git.woa.com/test/project/compare/v1.10.1...v1.11.0) (2022-06-28)

### Features

- test feature commit @hijiangtao (merge request !65) ([a091125](https://git.woa.com/test/project/commit/xxx))

### [1.10.1](https://git.woa.com/test/project/compare/v1.10.0...v1.10.1) (2022-06-24)

### Bug Fixes

- modify XXX and add YYY (merge request !63) ([e3a01ce](https://git.woa.com/test/project/commit/yyy))
```
When using standard-version, the default configuration covers most scenarios, but finer-grained control still requires us to adjust the command or the configuration. For example, once the repository url is specified in package.json, we can mention a user via the @ symbol in a commit message and reference an issue/PR via #, and these will be converted into the corresponding hyperlinks when the CHANGELOG is generated. The following are the scenarios and caveats I ran into while automating version generation:
```bash
# first run (without bumping the version)
standard-version -a -- --first-release

# but if the project version does not follow the convention, a manual release is still
# needed, because the project must start from v1.0.0
# https://github.com/conventional-changelog/standard-version/issues/131
standard-version -a -- --release-as 1.0.0

# make sure the repository object's repository.url is configured correctly in package.json

# commit message that notifies a specific user
git commit -m "fix: bug produced by @timojiang"

# commit message that references an issue
git commit -m "fix: implement functionality discussed in issue #2"
```
Summary
This article starts from scaffolding selection at project initialization, moves through style isolation, lint rules, and git hooks, and ends with template tools and automated version logs. It introduces some of my practice and thinking during development, and aims to show, across the different stages a project goes through from code initialization to delivery online, the corresponding efficiency gains and automated processes. Due to length, not every process and tool involved gets a detailed API introduction, but you can still search the web for the keywords mentioned in each part.
When writing this article, I tried to describe the different processes in relatively general terms, so that the solutions remain useful for a long time even after a specific tool library is replaced, and so the content is less affected by the shelf life of the tools.
The main practical conclusions of the full text are as follows:
- For scaffolding, you can use community solutions such as create-react-app or umi; if you want more flexibility on top of CRA, you can also consider craco;
- The value of CSS Modules goes without saying, but you also need to pay attention to TypeScript checking and to compatibility with your CSS class-naming convention;
- ESLint is now standard for most projects. If your project involves multi-person collaboration, you can configure some additional plugins to help keep the style consistent and reduce merge conflicts. For unifying code style you can also customize git hooks and the tasks they run through husky, such as:
- commit-msg
- pre-commit
- Among them, lint-staged customizes formatting operations for various files at the pre-commit stage (mainly executed through eslint and prettier), and commitlint ensures that the commit message conforms to the convention.
- A local development proxy environment is a must for many teams, and you can choose a middleware to enhance your dev server.
- We can also consider improving development efficiency through code reuse; the scenarios mainly fall into two categories: code snippets and component-level file generation.
- Whenever a project goes online, a release should be made and changes logged according to convention; we can introduce standard-version to standardize automatic tagging and CHANGELOG generation.