{"@attributes":{"version":"2.0"},"channel":{"title":"Zhe Zhao","description":"Much-Worse Jekyll theme for academic page and blog","link":"https:\/\/persistz.github.io\/\/","pubDate":"Wed, 23 Oct 2024 16:16:22 +0000","lastBuildDate":"Wed, 23 Oct 2024 16:16:22 +0000","generator":"Jekyll v3.10.0","item":[{"title":"MLSec 2022 Write-up","description":"<p>Here is the team suibianwanwan, from ShanghaiTech University and Singapore Management University, I\u2019m very glad to win a prize in this year\u2019s MLSec competition. The track I participated in was Face Recognition Evasion. In this competition, we got a near perfect score using only a few thousand queries (2k-5k queries), which is 1% of the other TOP teams.<\/p>\n\n<p>The background of the competition is as follows:<\/p>\n\n<blockquote>\n  <p>An internet company wants to reinvent the experience for its website audience and use their faces instead of passwords. <br \/>\nTo implement this visionary idea, the company\u2019s data scientists have built a model to recognize user faces for authentication. <br \/>\nThe internet isn\u2019t always safe, so their AI Red Team implemented some hardening techniques after adversarial testing. <br \/>\nBefore the official model rollout, the internet company requested some help from AI and cybersecurity communities.<\/p>\n<\/blockquote>\n\n<p>Specifically, there are two metrics in the competition, which are <strong>confidence<\/strong> and <strong>stealthiness<\/strong>, \nand the goal is to maximize both. \nThe most important evaluation metric is confidence, which \nis the probability that the model predicts the input sample to be the target class. 
\nTherefore we need to generate targeted face adversarial examples that make the neural network misclassify.<\/p>\n\n<p>More details can be found <a href=\"https:\/\/github.com\/drhyrum\/2022-machine-learning-security-evasion-competition\/tree\/main\/biometric\">here<\/a>.<\/p>\n\n<p>Essentially, the competition is a black-box adversarial attack on a face recognition neural network, so our main algorithm is a model-ensemble BIM [1] or PGD [2] attack. On top of this baseline we made several optimizations.<\/p>\n\n<h2 id=\"basic-attack\">Basic attack<\/h2>\n<p>Let\u2019s explain the basic version of the algorithm first.<\/p>\n\n<p>The first step is manual image stitching to get the starting point.\nA traditional $L_{\\infty}$ attack spreads adversarial perturbations across the whole picture, but in this competition such perturbations hardly pose a threat to the face recognition system, so we selected image stitching instead.<\/p>\n\n<div align=\"center\">\n<img src=\".\/figure1.png\" width=\"200\" height=\"200\" \/>\n<\/div>\n\n<p>Figure 1. Left: a normal $L_{\\infty}$ adversarial example; right: the starting point generated by image stitching.<\/p>\n\n<p>The second step is the model-ensemble BIM or PGD, where we use cosine similarity as the loss function. 
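<\/p>\n\n<p>The loss pair_cos_dist used in the following code is not defined in the write-up; here is a minimal sketch of what it might look like, assuming each model returns feature tensors of shape (batch, dim):<\/p>\n\n<div class=\"language-python highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>import torch\nimport torch.nn.functional as F\n\ndef pair_cos_dist(target_feat, ori_feat):\n    # Cosine distance between paired feature vectors: 1 - cosine similarity.\n    # Minimizing it pulls the image features toward the target identity.\n    return (1.0 - F.cosine_similarity(target_feat, ori_feat, dim=-1)).mean()\n<\/code><\/pre><\/div><\/div>\n\n<p>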
The main flow of the algorithm is as follows.<\/p>\n\n<div class=\"language-python highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code># One iteration of the ensemble attack; PGD or BIM repeats this many times.\nfor i in range(len(self.models)):\n    ori_feat = self.models[i](ori_img)\n    target_feat = self.models[i](target_img)\n    loss_cos = pair_cos_dist(target_feat, ori_feat)  # cosine distance to the target features\n    loss_cos.backward()\n\n    grads = ori_img.grad.data\n    with torch.no_grad():\n        # descend the loss to pull the features toward the target identity\n        ori_img -= self.step_size * torch.sign(grads)\n        ori_img.clamp_(min=min_, max=max_)\n    ori_img.grad = None  # reset the gradient before the next model\n<\/code><\/pre><\/div><\/div>\n\n<p>After performing the basic attack, we scored around 89.998105.<\/p>\n\n<h2 id=\"optimization\">Optimization<\/h2>\n\n<p>To increase the transferability of a black-box adversarial attack, EOT [3] and DI [4] are widely used optimizations. The core idea of these methods is to randomly transform the image before computing the gradient of the adversarial loss, which makes the gradient, and therefore the generated adversarial examples, more robust.<\/p>\n\n<div class=\"language-python highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code>def eot_attack(model, x, y, eps, alpha, steps, targeted=False):\n    x_nat = x.clone().detach()  # keep the clean image for the eps-ball projection\n    x = x.clone().detach().requires_grad_(True)\n    for i in range(steps):\n        # EOT: random transformations on the input before the forward pass\n        x_t = x + random_noise\n        x_t = random_rotation(x_t)\n        x_t = random_scale(x_t)\n\n        loss = F.cross_entropy(model(x_t), y)\n        if targeted:\n            loss = -loss\n        loss.backward()\n        with torch.no_grad():\n            x = x + alpha * x.grad.sign()\n            # project back into the eps-ball around the clean image\n            x = torch.min(torch.max(x, x_nat - eps), x_nat + eps)\n            x = torch.clamp(x, _min, _max)\n        x = x.detach().requires_grad_(True)\n    return x\n<\/code><\/pre><\/div><\/div>\n<p>In the code above, I used random noise, rotation, and scaling to increase the transferability of the adversarial examples; transformations such as cropping can also be used. However, we need to consider the nature of images in face recognition: photos are usually taken in an upright pose, so the rotation angle must be strictly limited, and we did not add a random crop transformation, which might fail to keep the complete face.<\/p>\n\n<p>The next aspect that can be optimized is the loss function. In addition to cosine similarity, I also use loss functions that jointly reduce both the confidence loss and the stealthiness loss. 
We call this di-optimization.<\/p>\n\n<p>Specifically, I design two separate loss functions for confidence (cosine similarity) and stealthiness (SSIM). During optimization, our method considers only the directions in which both losses decrease, i.e., we update only those pixels that improve confidence and stealthiness at the same time, so that we obtain a better starting point for the subsequent attack.<\/p>\n\n<p>We implement this by comparing gradient signs. The code is as follows:<\/p>\n\n<div class=\"language-python highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code># Multiply the signs: the product is positive only where both gradients agree\nensemble_grads = torch.sign(grads_cos) * torch.sign(grad_ssim)\nensemble_grads = torch.where(ensemble_grads > 0, 1, 0) * grads_cos\nx = x + alpha * ensemble_grads.sign()\n<\/code><\/pre><\/div><\/div>\n\n<div 
align=\"center\">\n&lt;img src=.\/figure2.png width=200 height=200 \/&gt;\n<\/div>\n\n<p>Figure 2. Close-up of the local perturbation after di-optimization.<\/p>\n\n<p>By filling the above steps, we used about 2000 queries and got 89.998646 score.<\/p>\n\n<h2 id=\"tricks-in-the-details\">Tricks in the details<\/h2>\n\n<p>In the process of EOT-PGD, we use several small tricks\nto further enhance the transferbility of the adversarial examples.<\/p>\n\n<p>The first one is resize the gradient to enhance the size and stability of the perturbation, so that pixels within a small size share the same gradient and alleviate the effect of different preprocessing and cropping of the black-box model. We call this method a mosaic gradient.<\/p>\n\n<p>The second point is that after the normal EOT transformation, we directly apply some minor perturbations to the features obtained from the normal image.<\/p>\n\n<div class=\"language-python highlighter-rouge\"><div class=\"highlight\"><pre class=\"highlight\"><code><span class=\"c1\"># Features perturbation, 5%\n<\/span><span class=\"n\">feat_range<\/span> <span class=\"o\">=<\/span> <span class=\"n\">torch<\/span><span class=\"p\">.<\/span><span class=\"nb\">max<\/span><span class=\"p\">(<\/span><span class=\"n\">ori_feat<\/span><span class=\"p\">)<\/span> <span class=\"o\">-<\/span> <span class=\"n\">torch<\/span><span class=\"p\">.<\/span><span class=\"nb\">min<\/span><span class=\"p\">(<\/span><span class=\"n\">ori_feat<\/span><span class=\"p\">)<\/span>\n<span class=\"n\">feat_scale<\/span> <span class=\"o\">=<\/span> <span class=\"p\">(<\/span><span class=\"n\">feat_range<\/span> <span class=\"o\">*<\/span> <span class=\"mf\">0.05<\/span><span class=\"p\">).<\/span><span class=\"n\">item<\/span><span class=\"p\">()<\/span>\n<span class=\"n\">ori_feat<\/span> <span class=\"o\">+=<\/span> <span class=\"n\">torch<\/span><span class=\"p\">.<\/span><span class=\"n\">empty_like<\/span><span class=\"p\">(<\/span><span 
class=\"n\">ori_feat<\/span><span class=\"p\">).<\/span><span class=\"n\">uniform_<\/span><span class=\"p\">(<\/span><span class=\"o\">-<\/span><span class=\"n\">feat_scale<\/span><span class=\"p\">,<\/span> <span class=\"n\">feat_scale<\/span><span class=\"p\">)<\/span>\n<span class=\"c1\"># then calc the cos sim between ori_feat and target_feat\n<\/span>\n<span class=\"c1\"># Mosaic gradient\n<\/span><span class=\"n\">grads_adv<\/span> <span class=\"o\">=<\/span> <span class=\"n\">torch<\/span><span class=\"p\">.<\/span><span class=\"n\">nn<\/span><span class=\"p\">.<\/span><span class=\"n\">Upsample<\/span><span class=\"p\">(<\/span><span class=\"n\">scale_factor<\/span><span class=\"o\">=<\/span><span class=\"mi\">1<\/span><span class=\"o\">\/<\/span><span class=\"mi\">2<\/span><span class=\"p\">,<\/span> <span class=\"n\">mode<\/span><span class=\"o\">=<\/span><span class=\"s\">'bilinear'<\/span><span class=\"p\">)(<\/span><span class=\"n\">grads_adv<\/span><span class=\"p\">)<\/span>\n<span class=\"n\">grads_adv<\/span> <span class=\"o\">=<\/span> <span class=\"n\">torch<\/span><span class=\"p\">.<\/span><span class=\"n\">nn<\/span><span class=\"p\">.<\/span><span class=\"n\">Upsample<\/span><span class=\"p\">(<\/span><span class=\"n\">size<\/span><span class=\"o\">=<\/span><span class=\"n\">ori_size<\/span><span class=\"p\">,<\/span> <span class=\"n\">mode<\/span><span class=\"o\">=<\/span><span class=\"s\">'bilinear'<\/span><span class=\"p\">)(<\/span><span class=\"n\">grads_adv<\/span><span class=\"p\">)<\/span>\n<\/code><\/pre><\/div><\/div>\n\n<p>On the last day, we found that all the other top players had about 300000 queries, which was several hundred times more than ours, so we used the additional strategy of warm restart greedy search. 
In brief, all of the above optimizations are combined with many different perturbation constraints ($L_{\\infty}$, $L_2$, $L_0$); the result of each combination is recorded, and the optimal image is saved as the starting point for the next warm-restart search.<\/p>\n\n<div align=\"center\">\n<img src=\".\/figure3.png\" width=\"200\" height=\"200\" \/>\n<\/div>\n\n<p>Figure 3. The final adversarial example generated after EOT and the other optimizations.<\/p>\n\n<p>In the end, our score was 89.999548.<\/p>\n\n<p>[1] Kurakin, Alexey, Ian J. Goodfellow, and Samy Bengio. \u201cAdversarial examples in the physical world.\u201d Artificial Intelligence Safety and Security. Chapman and Hall\/CRC, 2018. 99-112.<\/p>\n\n<p>[2] Madry, Aleksander, et al. \u201cTowards deep learning models resistant to adversarial attacks.\u201d arXiv preprint arXiv:1706.06083 (2017).<\/p>\n\n<p>[3] Athalye, Anish, et al. \u201cSynthesizing robust adversarial examples.\u201d International Conference on Machine Learning. PMLR, 2018.<\/p>\n\n<p>[4] Xie, Cihang, et al. \u201cImproving transferability of adversarial examples with input diversity.\u201d Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 
2019.<\/p>\n","pubDate":"Tue, 11 Oct 2022 00:00:00 +0000","link":"https:\/\/persistz.github.io\/\/blog\/2022\/10\/11\/MLSec-write-up","guid":"https:\/\/persistz.github.io\/\/blog\/2022\/10\/11\/MLSec-write-up"},{"title":"On the Demise of the Hand-Torn Chicken","description":"<p>When the Baiyulan canteen first opened, its most popular dish, the hand-torn chicken and duck, became the first stall to be replaced.<\/p>\n\n<p>The hand-torn chicken and duck was briefly taken off the menu when the canteen opened, but strong demand from students soon brought it back.\nIt was a 15-yuan set meal that initially included half a chicken or half a duck plus two vegetable sides.\nOnce hugely popular, it ended up deserted and became the first stall to close down. As it was one of my favorite dishes, let me analyze the reasons, in its memory.<\/p>\n\n<p>Assuming the stall really did close because of poor business, rather than high costs or staff turnover, I think its operation had the following main problems:<\/p>\n\n<ol>\n  <li>\n    
<p>Not scarcity but inequality. When the hand-torn chicken and duck first appeared, small chickens and ducks were used: each customer got half a bird, including a leg and a wing. The birds were small and not very meaty, but at least everyone was guaranteed half. After the dish returned, probably for cost reasons, large birds were used instead, so one chicken could make 3-4 servings. As a result, a customer\u2019s set sometimes contained two drumsticks and sometimes no drumstick or wing at all. When you get a set with neither a drumstick nor a wing, the huge letdown can leave you very dissatisfied with the purchase.<\/p>\n  <\/li>\n  <li>\n    
<p>Blindly copying the neighbors. The stall next to the hand-torn chicken and duck sold teppanyaki fried rice. To speed up serving, the fried-rice stall had customers queue once to order in advance and then join a second queue to pick up the food, so that when two students ordered the same kind of fried rice, both portions could be cooked at once, doubling throughput. The hand-torn chicken stall imitated this queuing scheme, but it was completely unnecessary! Serving a portion takes only about ten seconds, so a single queue is entirely sufficient: order, prepare, collect. A process that should have been very smooth became cumbersome. You first queue to swipe your card and say whether you want chicken or duck, then wait in another queue to pick up; by the time your turn comes, the cook of course does not remember which you ordered, so you have to say it again before collecting. The poorly planned ordering and pickup flow made the purchase experience worse.<\/p>\n  <\/li>\n  <li>\n    
<p>Barely edible vegetable sides. Later on, the dish upgraded its sides from one to two, but the upgrade felt meaningless. The two sides were oil-boiled cabbage and boiled shredded potato, both completely flavorless yet very greasy, and neither paired well with the hand-torn chicken and duck. If the stall had analyzed the dish\u2019s characteristics and offered better-matched sides, business might have been better. For example, the dish is quite dry to eat, so it could be paired with soupy vegetable dishes such as stir-fried tomato and egg, braised vegetarian chicken, or Shanghai-style mashed potato with chili sauce, or a gravy option or clay-pot soup could be offered. Sensible pairings might have been a better choice for a dish that does not go down easily with plain rice.<\/p>\n  <\/li>\n<\/ol>\n","pubDate":"Thu, 01 Apr 2021 00:00:00 +0000","link":"https:\/\/persistz.github.io\/\/blog\/2021\/04\/01\/%E8%AE%BA%E6%89%8B%E6%92%95%E9%B8%A1%E7%9A%84%E6%B6%88%E4%BA%A1","guid":"https:\/\/persistz.github.io\/\/blog\/2021\/04\/01\/%E8%AE%BA%E6%89%8B%E6%92%95%E9%B8%A1%E7%9A%84%E6%B6%88%E4%BA%A1","category":"Daily"},{"title":"Blog Demo","description":"<p>Lorem ipsum dolor sit amet, consetetur sadipscing elitr,  sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. 
At vero eos et accusam et justo duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.<\/p>\n","pubDate":"Mon, 08 Feb 2016 00:00:00 +0000","link":"https:\/\/persistz.github.io\/\/blog\/2016\/02\/08\/Chewbaca-is-talking-in-his-sleep","guid":"https:\/\/persistz.github.io\/\/blog\/2016\/02\/08\/Chewbaca-is-talking-in-his-sleep","category":["banter","thoughts","dreams","fears"]}]}}