From 78a6c91f8b3da1af54e21c8c7fd8217529ccf34a Mon Sep 17 00:00:00 2001 From: "Wei (Will) Feng" Date: Mon, 12 May 2025 21:03:04 -0700 Subject: [PATCH 1/2] FSDP2 tutorial --- _static/img/distributed/fsdp_implicit.png | Bin 0 -> 41760 bytes intermediate_source/FSDP1_tutorial.rst | 448 +++++++++++++++ intermediate_source/FSDP_tutorial.rst | 631 ++++++++++------------ 3 files changed, 745 insertions(+), 334 deletions(-) create mode 100644 _static/img/distributed/fsdp_implicit.png create mode 100644 intermediate_source/FSDP1_tutorial.rst diff --git a/_static/img/distributed/fsdp_implicit.png b/_static/img/distributed/fsdp_implicit.png new file mode 100644 index 0000000000000000000000000000000000000000..85b19b7e72e87ddc4b8f6ef076b3b313ab4d4156 GIT binary patch literal 41760 zcmeEu`Cn6K_ApweYFoFqE+C7Rx(yWy7%;4Mte_%LMTihWS`i|IfDpnKlE}2xR<c1OF;}Z#ri3+ON+yo0!~*HZlDj#vizE{Cxvl z8+CrWo8G@|vIY3{HgJ9M?WW&r0`I@w{6|gGXWzc|M|o2P0B7=1;FipnKx=DA&s$d;eY?_~r55jijWTuJ-mRDJga-PImE$2zv)G7;OKcqrIc! z2>{{*IxQ~gv(yuDXq(>}`J(W#NKAsHaJH|9SK8 z9{;1})qmG?aCH3eKdb)7tG`waLPsW^iAMo_CVBpwUcXoV&zHYfbhqD_`hTF}H$4CP z6hO4+8}9c1BAVwLjz9jh6QH9-^v3~LfIGl(zy9%n|Hpu9;|^R=77ioZ4w;yoHaY+C zM^{oesmeDMjxxAO#0=|PiCTTh#8-gV35KZ|kym~`3kEUt0r!M6YQ`rxZ9lP%XqxBjKp?`<`|*O56w zW|Uq#^k3UPwQJhy&%OOIh9`SA0iDvv-hK1GcG{7-f9X%7{G;Bj4)WHU2a`$a>Z|`{ zE&$YyXHT~L3;0h>#GReoyXD#h8@G}YxW2045xOgXT$-C`UtOM`o~ZKBG5ogkjJC@? zTdW(B<@Mw0r3u11C`{De)~@nWwBwD2&X!6HCj{9kFxL%eFR*pc88YiAanD#@)N6O9 z-JH|eNyIphH@j`7M7v*w-(cY(88YmulTe`6?$O+2|I>i&T7A6qlMiQQPs*m)@Q;f3 z6U$9_1EsUCyj*U0!Q0mKU#|-hq|v4VUNDW7-aJqy^z>a9!U@L808|My5{850j2nZR zW+ief@mFfl1zpb(&cd0hV8rwjkz4Ti=S(&zc=S?Rzt6Ori<4DCi&GIkU zrylNX_*}0yMuU#X2>7s-*a0I}8XtwvHQ=T+hBE)W$M!V7bs_^zElm-`7Qr8m`F3BF z+mM;;dXfGplE;c5rp4?G%J(c;>c%Rymn`+tLLD@sv+e}_!P6LY{XJRXZE$vm6OkZHgD8t-6}&BSB#ywmoqnS&?Nfb4cfYBI=gJ4JvokyT$M~?rOP*+ zNiN^EuP*wQ6Yf34?XG5Z#o4kM#;;g}8BPhz$8mnNRa;ycGhiH`H(8FonHHau`Y0~u zcpQDAvY=%=?-D-R#hf?G??-gqc(fvt^*@;vAxIikdcXQPD=pbh(5k_WdCXatpT{?+ z6KPAh=bF+hTWkJmfLSYf!=GsxpW@D(#-~ee=1oEyN=ErL!|} z8{@@2@L+dlK6HNS^&-kXYHbr0FU|T{H^^m7mFlcfv4wT*jNJ3|)&;SiBrF;V?klOo z;h8z9*-e)1<5@mUk0xevt_hhUR*8PNg~Ua)J5TdH)>Q7nN~}*@3Y*%xfz6(|mjf() z?$R&EczK=(ZpwPvyxq!jzi}j8HvZ+074DvfrOArWCGq$@za>qgy_@RLr<+qUa>G_y zm)1Rc(w1VapT|qHIPLB)w-|fv-jw;JkkY+H2+T%)RBjWmgw% zF|o_muK`BWIYTLAg?{auC;{eUUuF_BvSMt6IoxucsJ~tjt~`m*1l7=B^isVwf4{zX zUSXAz7L9-8MI&5SpZh>>F@Hds_Jw3%goJNIhm$<(PJb1aZujq2h;&^%;-I;%u3tbwL6N}Upqw6_$Z4e?owleMfETFs}QO+e|&nv z*rrMxs~n3?(jabF^G_*+{Kk|un{FyonXUzk`?jrgZylALj#VD%$JmUuM6<#PJrwx& zp-W+g;VqpBV@Q^v2aX0+ zKg>&;3Q-cuk4!x&5*^j%-Z)-Ce-=fL+@VXED|58Y(TZr+xxcgxLXd-f&Vvwt8_A;P zYO>i=?G1k<&G~LQULYQV(P<}In3H9s1=Q`IiEMd0eyY!a-60U`P^#e^<5 zoOguY^|p4nHIAxpiAh=$JZ8?gPQN%6kvfj%KGi5#!RTJ4U3)QA@F-qp21U}Vvt7*l zj)H6g5&5PQ8cZb^eVELH5_HD0$m;IHu*H8Q~;XTu)49 zQruljOY&lUJ|LI3l;o3@7p(*soWcWvrG{XKZNs(s?J{rB`7@eahsJ@h8s`0>V*$LJ zragTvw6-#Ak>v5-nNXUJ&!%DGD8HQZs)OqLo67SPK|>g_hX|7fPBv5%nOCeBkO`3d zSPliuo_DPvPQ7m^(|L#i2l(Qyv;l>Lvih3dNpfF+uzN$H6Cl`UuBf?-LA3j3kBgZWAi>L#wy1_ zbD-e|Sl6qv>6I2;fv*6jIS?IraYY=UvqFs4cY9WbCxfC`j5uw8f2D=027l%A#o*O? 
zT4W1eja3-d52o-a>PxL(yVH?%lwMXcl9!7MZ4AuKk61iO4pKxUOW@ zpHMEetX$dwmXFvb_}~{(K9F7lr*m*J^$5)qNn$gP7+dVdDlt{Mb5B1Aqf&g5DXo2T z^%h7qlzGfr_~EoysAQ011kSEeGG*86gqkU)Zx0 z5M{5nca))MEnKrH4WoirLY?VS^cBqo3E6@*d$m&OPq1AYR-AN%bWv?yNipA1@*cRP zaJpQgJ+JF;2CcNsC`;&$;p59Qwi2---O%eSRF()Vg*fqzGQG4U15$!1@Ie)qs@}DY z7}hGNNF^x%POe%%#*S?{asoN0^#~jSKjtN$^it-`Rt;O{;OqfqYTf;FnP<$k8w4_T zO*pkKP}hVr78)j=`#T3A_MR~}t28JKkJ~jDaU6BDS*zm@h3VZ7qa|H9-)Qs8OcKPo_0Q)CJ^}CO5n<4au33mz<0t zDcL*S;p$eT#I`Fqw~=!ME36p@j~t=985eoW{ANu`{UxG#<>*QIvKchA!n%{(0!mi& z!gYdx2|I7`Ior~t!T1O8u9Pt3BZl0Y74e+yObtSbs@PrpBSIvK-`>TiNAMQR9YlAL zFk0Ve(9^>8V-Q}?grUbqzQ;wVf=&kmv_rNehiUJAmzOcz8Xcytukws|L^LW}a6U?m zpD-6DbhkIgHugQ_+oOD@7lS-zh{#-`a?rO_0T+^}0uDB-YU(;E0gZ}u*9c2Am&ZVJ zi~MwQD+NM;l_InpkmL$MUd=`ppP~2A&EX7Ycmx8EIWgUzcoCOFlZqd+TPnaWLc8rE z9Pl2Q_SBteuxFw(lLt;(-x58IBsZl*hUd#}z>=f$cjgEdejLy~7Qt~SN>vsSle0<+ zXED|uGx}LW{-ri_-YSxkR*p9(4&0X(R>!j9D_Nq-0w&hWbvzo{%|*N@)gQY}LHd@4 z5*DmxpnZA>fl8WN=-=-|=zg45zwE|18Rg&z&x2$*ZCS}l7k+|1w;oEw3?5lZU=Vbe z?d1%$H?D{lONJ`XTT0Fyg$nXDRtOkqF3fkYZfvk{tiKCHBJyRaSVKsd@;VO0PMeLq zrl!Dq`Wk$~RB#O^bq2n~S<`wptoK%oR5#wcEsWBOstt&^MzFZvtMA9`m>p>WN#I9* zIrl(Ubd+v=?Obdu6f0-lhYA??h5bjS1tkTQijg{75Fqc}28&omnjHr*-ZERFIWU7o zHCYR&2~N4_e1HBA?E-I312PF!bawZsXx?uwlIi>OOh#u)Sg2b=hJ7cF52}VulSUs$IEpe zG-pc^d`nB96nHo@Eu(onIswf8fR~_aC+%Ewhexn``nVklmU~XgI!G zcd?&XK4Pd`y&uWxGq+{tHL0oi!ZTuN>fz=lxEbO^?|FA0Yq=8t%!AHQ@J6Fb06W!o zryM4PXlAC4&QU6wF11l2?V@WMPP*ch7)&}|Mg((J7PeSr)+OQPcG^H2SZFcWP6@Hs zSNXZbdoA^hqHO_6XB6D=Di4+wfs-_~Hl>U6;(iz5{d^oP1qHe-r!OH~G<6P03;8F= zSH`R8zLw@D4_Gz-!enJ(c|8T^M)epR7_t$JS<&=y-+v23@Y~f6=OY!w=AElpqVBBr?1_u=B zSs_t4%@v~-dQQ!FP0Qqu#YviC->&B)jUm6>Un`H}*s5E^t@=g&lfH)1xm9J#&u_SO ze}jsIOy4wyZZ-xy5X$E7-xzy3bqBp@Vvq&R6}ndRjeBZ8J6}5d{LW9z{tWVvN5)v2 zaoDZ9-d2d1E(R!r31Z#z?tTP_>T_!(X3i7_T}(rY6+TPFi-9DXne>-bA6;Ts0*L4z z@nwoQJEjjTtZmaZ)mJ(;%Cb`oJe4=NIA{pqR>uSsb@w)_A8Q`fDrdsflxm{?<85&U2bVsyZedI{(N`QW|z3n*2^iuXDB@-?Zs+#nmAe| zkg70CwAHr47M(;P%?Dq}0U84*3+%S@AgsDySod(wD?+S0ijn9mu|VK!FQLy_=L|6@ ze58CLp+~QfIGk^-KV(KMj!^uafxJvTMKgC4ic&!n2tH}FADcvt3=_hb{R*WZ%30gF$evCzOE#DDgtdS|~ z%97xb`r*T+vAvyQ@LYqij3%eeom92f@K#;NvRQ}^P`E^ROdM$iI^7!(IzzoG5T*)( z)t9G_5f>%BB$_3^lo_omdE^jdNsPlka=3dBoR{J5vOq>cGny!=Q=c6XVBjFe0u7!h zSx6XHxJmBC88{gv7>nzHwpa@B(e{O*+W9HGkq>g0j^L1XA!lYq#o%hnREoG49UQ>% zpawe0mCGU8zjOD_^*by<0Us;5e^_EJ#{-@koEhSzQC$s&!zArpH`f3w3ta8Tg%U`- zQzU%~IIh#2+`863h7Kw1CV|BjYZy~@o82(>zHuJ1pOW*R_&Jl zs)cFZoeaa9)fkfXvek=T?e7lObWtSvEj0LE4_eO85))J=!pI~ z0nnB=M(O@GOXxl!sD2Ryk;*K**FrVv28y6&l-!z<=aXQYwa1@kVWll%Tz!SP!DJn1~_1$`Rs-BualMJRZMomj6zlVy|i5_I?b z80Cn0PyIr@7L4ckCf7RMjf9i=r)HoL3UPYn6UJ|pJ5^4Zrs(RxX2TgxGK^OA0=d#_ zr}kbWu?IuGzBhl;vlo#Ehdg~eUP46ndia(4yUB;}C#^cKlH*?}_lF1CRTetLD#K&@ zeb7NRI3Ka#d>quXHlJaJeN>7!uZf41!fbQ3{CI~Sf)g*Si>`XcnYkdfE!gh!a4K`I zN#)j8ONlJ=7bbMNGa)C)>S(SD`sJ#ntw=P1w8iB_=C~5m`eTCoSs3bz5R{|0EI&=q z0Lopy(6`nTrok5XbaU8(@ELSGV|Kv2Mg#jAc|h3S3j<$)%g+){WZDaF6L5eeFN!q;tciGgYk1VIEb=v ztRlH7PJU!PyLSF@p$E8%Ob=d?4lFxa)s}-0Z1WWJsOplILiXJTLqmm%bCI>U*pRD^ ztpqhM@>2yWY6&1gsTyB@ntKbN3A4V_bK4E)>_O@xRb4;|3O>URjyMNL_Z&nevSVgK z8POA+g}K7IC<79u&DURrh9a8#0}}3s2nMA;kL4}U0~F8MD^J+mlq;x22lN7Zy$D%rzrEq-Ffljg903{S)r;&qYW#IM zMvv-)Thq(6ckhUAPz+Ms`z!oBjN|PtSl_8~znTIzNgrufK6`BEJkR4*Me)q@8g3N< z)~B?i`Y404ho3*JY*_e&t4rW0=nS!n8jdGfE2)WSd}B)B2Rx)ZvK%Ix0M=slT+CRp zJZ6mxn;#v7=x5TUP76g7_29L_v5TM6HQO@4@CD-@P1NG&#qc-i|1^9fZWrra)gBP6 z$5uhlouBkKPRL2?Dsc-5#A)(IIvM*3FO*C`0Y39YERL75GRmoj*kTQ!*!sJ2^Wnoz zD=%>Gv(vJtB2hGK!_f257IPJpnXJQj8WG^+Fpu`mI6kE>{+Wa}K4lv$pL6$S zB|;v`&;cvP zm6U9T2D#`TC(C~j@|ck-%6L-)yqncmSsH-KacdpbD5jN*mQwj>B&1ZRVBJqHKEr=b 
zUz{H?6hmZCtxC=wvYrul#LjaPAx>oq+$Pk6sNvF8aauEg5Ir^h`cxV(7m!Wt}hEF)(E##^{x zEev0(Dh-&3Yct34DOpcfHCtDg{C-JG7{4Ie89V+keOGIgn_izu-g(&&N{EW!7+(9n zGys=ZMa|1-)PGmOZ@M25BTKE0_iQYt-}4rDMjV9+JxK}g5(3^@LHL2f6@1Xj+ziWp z%0^8tWRgc99^D7#x#B2sxwkBM+Te0y7%p8DMR8#qCpT$ZLCQWGWo-lc1UN^bOnez* z6|E|V4sctL+j~RI6|~5)g7acbgggi{#O|||BN`CN%t8l`WPGDCwC1y@!G|DyUocKE zFvD$9h+}q8XtSR$%!kSZowaMZaQTpzvaA!ZMx0!tlTK(+(Z3vjmU63lK-QNr}!NA9;}6iP2RDcUCyDBY3L_Rg47%?)$0nI1UvakRvM{ zccB`{QP7EgMQfB%bmrd9NY76Oo7O+DmmexE@s%0Q<(AJ~^wKC+K@-u})dg8vM{n|& zGYwdK*!Odqg1DfguoQZK;!DBh_;>A%F>y?WtMlCMJN;FHasxp6A{{0cUuhAm+z+?{ z!sf2$7Om!%XIQGGlXBQ}wRhsIsvd~pAJ_>6%pp7`fG53)*AVg5Z-w-mYeD zX{UnKLtes1(`}cBoU!yhM@#9-46%h7Vtfqu-dKM-pimV=vCR>v{)o~%kX>u<8J*9D zshVn+{<(;T3F&0j`I>u7NSs->!!SabG!%tY;Yi|{qJ)@=kTcy^LvsU-9@OP>f?z^) zoelOkmhl?2K{ZeV#v$L=aN#Lnia6gbW!y!U_8*edE08ns6D3fC5VYb(h|Z1;Zfb3F zTsyWNW*wc>;Diuf!tw|8DVo}QYv6WJ1^)rEa-}mZM_}QC#MOqxl6!)!U2-e~aR%fG^Z_*qH^dVwhP)tuQjqYz?o^c9cT@+!i%+V0jY`-(69)yh(h6u69NB0 zfM}_8I>1}p;}ww|bMAHltP5kIrH4RpiS=3}Zf~VR5vmhFRExrqv>^2&DeZi7;y{?Q zGFL$_*YuXKsy)#^kDQSEn1`mUh8BSmJ*FWOm(+td!)BiQSc9|}weY9)wdWX#NSGX5 zIjU$eI>zg(iLr_nirP)Qj~e79ukr*X;*n)cetKP84v{TWr-mwZmz`%w3Br1ZscS?a z7KxHxjQ}F&4>b#-sVR4eVOYod9*ldpZ|R>{4q}Kd)Cp|Mv%hP(>^0XAO3P>#L=Mr5 zsfmx%s{9QAB})5_rBG{mT63tf=VON?6hRtI;K)l*G2ziPIzr%|lbe%-_M6ARpyDgW ze!n%Wz-J!hgSFlaA3s_kjGDTWkQA8bSiAG5=Ur(0x86HTQJ3QvKY964p9P3Pk-Ooj z!aMv1K9n?cPU31Mm6<1!4N~m~-NQeWYqa$A?u%m{)Aynj@+T3s7O_H+s;7s6co9Vo zA?jOqpgABQ%w9$^#4_{1UgkAq&0`$D$X|8?E6*2#A=B~E*W<8DBMClCgjJanQ3!xF`q5?6s~1P;qvtZ+X5le_mT~qsZiu2hdMVTCt=}cTUbh&syVSAvl8S7 z%M%|R!WBpNwedMkRWH{xJ+y`F)?Qx*TQp1^U71~_g-GRR01l09h@U&oDbX1Ow7sqlQS}O!GyUg1opmHSsE-SQ1lVsPD>=~}XaHw5D;c*zi ziip5z>F4{D<3}Y|NqxS_mCJ*D7Gyb|N-+ilY>h8EBhG&Kxcd4v53Oa^uxje>1ZD12 z3wzHAQTB+}{mMn}l3q+9RWq&#gr~OzV}n^-pM-aLD_42po`-#%wCJ^I$pk!6eL{1G?Y--fe&VhAUr z4Dj9S0}mGnS8U9N=ZnRIgZ5|hVtUsv<%G(es1U;L)vX{{&sy&Y+bjIu(}9)A$3&!C zC1l5#5r4A15^t+@4_vW%2Je~wio*|sB;vhuc|Tk5?{C>zsT~-lCFali3|l$$GDC|f zt#gb;&GK7Ed_cJdaEf;DR2_QQe%cAYJ#*q+nLNku^WXD5CjVv%WO?YMa{C3t&dc^+ z=~j0UR_D8mi0gk_gna9@zKY)eGYC0P8}|6%!TGJL@4qsxEZL{q4ddDG8dow_OD1=2 zJ^ks0)4V)<)Ju7ht&`Gw^?OcsV5 z(JKI1K9|f%k+iC}>dQX5^;(0|vF|timK1aK;J5H21Ry^pddJf%iR_0jrDv^;5S$B{ z6bA|q!+$oJ7vO)Zs4IZ|<|I^?8iSVf5cJ$w2))dqb%}5lubRks?8!dPhn(J+%va zBL4f2eoGtjJh5pV^`UW4RrIcwwr=?_;LXo}it|0a)V{ViON`Lh3^b2je53urEqwLk znc~52FUYB_tx?dXEg8+QrxlNRHbAnFX5{;~{%gw5tJ@oWJ@q?u_`fFhWXg8~2uzE; zbo)<9J%6P7q@LMq@fGWr>Tllu&)lXz-rnla09ivicgC?Z|0n-x1E%c|kT*1bGw|HD zzh+1M+QMYo#?Y*w5B__W)A%<4JH$UZPW7KzS(;nFy$WRMtXn$W>JXXE-lQ_SY>9`Z zuco{62+eamfca7~l3wk=SaK}A{xZ>m4)u;_skNl$jw1VwG^K@&+@H{8IgzkN{qX?r z*pgGYZsN(afC8~Kz|LoR-|om-TU(h62rA9eZ?M~T?M0Sh{9UL61;s0l5mHW_JuwBPpN1f@{JTUxtMo`59EbyJ>b7(>*y~(OD%Ui z%Uz{)xv4mr(7pTqgy_lqx1RvHHS4u`Yl1II)|h$gJ-n9$=+X7GZ@L)k2CcFk-~Ppp zwSk!w=JNW9xTe{6!y<|za-#PRhJknGw=aA`tsW0c^62wZ(vF;vw^xxc(qC4cwg}@6 zbY2N>VX##`%<7yjfwf2wdY$k^4mvS+kRP_sYx;&ti~#p?69#=6qVS;;F+NSo?|k_q zeI7`O5__H3|Gq|0ru%7W!fea6T57|R*rDtWu!Fcahu{B?X)=z{jqcCf!9}htjV(P649pPb_|ZQobHFTodo5Ek%4Xrzavf1EDy z?sw<51qOOXl{tg`SNhLR-@_YX+!v_WU|a@jA_Uj~9<{0^#YnYpI7#hW#btrbF7#Oi zRY_hRwCV)%F{BqVgr7)-@g6x`d5aU`1Io5eSc@Ur`cuk_JP{3^!e+U$uFd~)5^14@MB@O zeahW@P6FT*K4NrUjWx$p;*lt;YKE(?tYc%DsU_=aR4sBK4&oWx{wJpp)dRK;A_c77A%jiV}07#>Z9hkkSZ9&I|CME=)d$N&n00D(G0=ULA_N zk8FR!FKR5<=WKIoVTx|JEZ;A3t1wU^*UVeVU6^cxV@x@De2r$|W3CGvZ8>VQQbwCQ z79|*)srGa1+i#V!lp{poiV(A+3%Rgt{|T`5XOhm)a$*C9-Ah9D99cSoaf9<(*K)~f zZx;Frk)wf@_*_c?Au{sHeK_%Xx9N>@Ns&%N$t;jHvL?QpB9Vmn3nqH))%*XV@IK63 zBMX#;*R$fIk4I|p<=EoQTv98BeInx*x`atIeROg^iGMB0;C>NfzqW32>QNZ1`%qr^ib8ii-&8{mp zJk)RwV&HZ=AU^Cm 
zr`y_hpPR^qK>}m?s#i{>h-|I4|5-72?9H@?zD|rDvxjJ_>ak`A_!agkB>QmeBJyJU zON@PK<6L6sJb4Yo7Lp@xSK>X5yuN7 z8u}gNXw^tiyCE-8Ia0lLqb-Prz6?YfHR5A%2_{zMR##Pepi|U45jrE0z9^%mRA84W zDtg2t)WTSPCA~3aJfvBO5X<;{Hw_&xcmdW5#)neU9@$0wg=yNi{m3iIfi_LxSJaSl zFZe_Ds$Nc97p$r!&GsdMEBXb%bba#|>io+*4~A}iI?s!pvA$*$O2bEWLH@m{;g+)C zjH;qFPMZk6rV0SaFEyE6A#YojoCKs4Jk5Grz;tCuL9)U#Mjr_dSZ zCkXa8)SsN2zI(#?&ni1-+iJ!D+i!)*X64*a z*BpKYao56pjs0G7H#jFH8B9%xsi{Z)At*J{D@%GZSe~)7tW({5Ps=~N9EJjk+io7RXgPO zEFO{bR$B|_5y>;{iu^Q)`V!mh%yf2T1Jq?stJ(u0E~cm|u~F;S%N;y)nop2RhQ||u zrCOvV_M|tsUr`$EMx=}vQ#IdEG43_^!a(uy+b4mIt3roA>yAyk9}<%?9KG?sn6pmm zJ4x*8NB)@+eIWV~%F<1ziWEE7vhEidhn1ZlmI{NXdYvVA;~k4T&%tsEYP3jhG_v&i z_ZYNUXMfpc+io+$qt7J6y%uG{x_EluGR$Zu(GQ2W^d8ninyRMF8!2@}A*qFX-Vh`~ z7RPt}{3I)1Km$Ox6t%=|L#Z0_<=q9Oo3m|Lv!7|0^^MgRj{Y-j5-#+$07Oc4~V@y>N<985N*y(E3zOw3rns= zt_1f1&;qUaHV?;o$3Ky+J7OF0e@&tFc#>sQ|{ij-v)2r%RTt(#%6z||b;Cif9@`79MFPpHVMSNKA6EQOw; zqj?$k3j7Sge#l4bg47_wv-hXn7|0@)oRN4As(Md z#Lg&Nyy?+#0cdwkK|98y$Qfi$lYZ&x>AZwIng|K7wUy{T%bG@n66Rb}z2<=6nqbsTGK57<8A4USbD2aFi@sGJX z!?eTmwc3mDlF+%($ZL{eRR-Eq4ZDNwd>JEN2z*N)YKv*n+fjkoyjiM!BC2p=uVlD( zx|o7gEhuD8UE$6$Ww=9`k0Qps4~C3^&bw`&%S1Mf+YeY0k_>h6!II!{+vBH}t{-nN zW}sTr5p3>pq%KT7)a5W%iqrmL{%3`A+Go<`#A-(Elc1F~<4U`BE#sacrR+7)hQJjL z10jO;q~sYIBI9F%Y`TQ8u&Pr_eyBb{Sm_CIA9gG>Dp&2Fe(4p1BDv~FU@);i!FD^4 z+pXYa}Lsemqw zmD-mAG3($a3v4GNi>?;JGAtl9OrJwthLlGb+B{crofCzjk=cfMWm8a>5rKZ!@$ zVY28V_I&zLhyCR0rc6uIMfaCyU5UaxgC+}s?e2;&%xuf3Myw3Q@BQ09CeLmJzIWgF z>GY((>EZ%)-?s3&xhCas%Y_c!9hJ5$%ouRId?!z-*l{i8=w`g<6^)xa6+7aV^6RWw zZ*KCku3M_~kfexg6kl}MZp7f<=zVspBWEkH9b*k&gikI@r@Tx3Ywv7rdrF(yx0&_f zIuJa+qv+VSIpyFaLRz^Rt66%Jr?Z^(kIDGr7O^02A=c}%Ef50kSU(KdEbEa!y=8jv z=B?9fw72lJr%ZMgLEnC|aJaPS7IZxEU8Bh+FX|lr`7@faiIAW^V&dEY$5>V3B(Jf+y`v%hV6=M_yuxpQsx1D*fC znNO`VQwWTupwEHe+z%Ik7J@F103HgkDf0Gw?>>XG+1`V%Hf?KLFiVfD-}z+Eqqj2s zHqRN}Vj;szPdnV|DBU$_mY$z;aQoJ_jU5B~FZ_;e&G6p5y70;@rJ#gz@YUUR(T|$+ zPVU{Ws ziU{mi4LPkJ7tpi;UuNz%9WT2DFIxxV3Vlw%q%+<;H~G%)pK0W-by5FO@s$gir#H5R zUVm!{XwrGp`o5WzRI?H8-Z!_ld2KQh17Gf&N36zU=iAupYx%!)zxS$Nj@h{lplZ_( zUv@ZWrlc{NojNyf-j(_Qm`L8+Q9O;*80MV4NPRCf5fU^;em*%q^%igU4&wtrq?T8_WI`? 
zOw)?_w%ML<5ZIk3Vh{b`KdC@qJpw;CWRmH#!A)B|G7jC0*t|MdFq*14Zk#dY;@PW% z{a*2SgJAdMZQy74(!t@0YXezs04iff{_$x}M*&Y4diyFsx4gH39i>bETs{K;uEIIX zk|Qpo85i)|GJ%9C4DH=<@zeou*~Wq2DM0aDXjQjVrzegrzMu81Pc!@<)?h zrmM|<%kuj!>}|)4wWh1t<@3#+8MinZ!|;hSt6ip)^e)O>>@>N0v2^#OS*z%F;OvPF zbFU9ob7QN!_hf&Ow%hT?1DlEY9mVZmXxi{x>=$+8cRGrB%8I&F#?sF}?cKbxEle9b z$9oru?@wxF+ut!UIXt@YEr4Uds?WXubms;E?0<0?&$vL?mgz*;CQrll6z+Z@Go>uo zWVuUR+GmcfA2y|=GT@i@e*_2#IDr0XZ$P;wzw`5Y^*>A2w(SLCY^ggkY4~mODm{4R z^S_(kzf~aw&YK&ny!L5hm9NT(oy>Q#58bo_VD=JA_rw5H4B24EtoL1aJ=vXUi=S`b z9e&}<4*SgrAmm8&Yfy z0&zd|JV*rt1-psOZ{O;0%(M+Jp?<}D%k<3FjJG$tBZ${+j=#2fI{;w*;YA>#=>e3| zo$F6eegHTh)SxRe(yIXriI(W<_9y*_D3Gv|BuQ1U6bII_Ya&m+jRqg6x{j@@Bm)|paR-w zf7^`M*nIrp{>#&odH{#f?Du4v^KWn3Ja_h%T*cCT9((YnGcY9Xk-Ju=ai@WWbs*O9 zaJWBE=xe?O$cl@TcZ+1U*X&;Vao1+z7bZN>QPH{HTmN2yrvm`citV<$PHlW_0deo+ zEt`2qZ+7}>_Bap&?r+_dY7Zb`j&6PZwM^Q^=$$Vu9=!S5hU4{V{_*)lpa5h4?R}HX z$jy+plknYWv#qyxEx&J?%JuB=zWlnWIRF6q^dYbdm=2V*u~#$yO}A9f-;k}(xc~93 z4(#qpFqUdOcwlRpA21}^*WYi+Jg^Ix`|G6_zb<+QC@h>fxM}ML?>w2R(%*g9clyIG zI*uOdKFG(+gFgeH^@yi$m~uCE=R)2e$n4Pkj%<1U`lfO}fI?CHrfsJ`E&h$FVXy!5kg4Z3fO`TO?>*R* z34C$o+V=g`fE1_zN*)J-58m9nf&cfu-tjBPZ6Mo|`2OyLufhQ&j3d3CzrGo39aKOPC3l5P|L(P*-{?{}&??t1IctMdT&;O^%`?pLFn5#&5{h=->a^By;IK014qMA35(jRSFCV@y=ObMIHp^zTLU_ZTq7@ zK?*&+dDqn=zfrX)^V6-F0sjEFCqMT6$kX2*>d3tM+U@AAr@sY2ukvp1*;=+?CeY4( z|5~Q}cVz!>3H@&g{U)#e?-B|C?^*#uPDQ}yOxAm~8y^W-0MZ8HWaXZK_+Dz)4Gxr) zBt1&;rcpFa)r*!9hvQeu$4c)cG9B}1inry~Uigel)I3S+t7OkbB*5y>=`Z$}S^-=+ zVzc<_%MRr4Ygc&ru(>JaXa_pkX!Xq|(~9$x$G2QF93J=EVwKh*)r*xTF9EaF05Wdm zIkeV4&0l?Q;WK{w>uy`j5Pd;IPquJ_a#1@wr6!T zGrdEKXJ%gd!Qh<{kmvmwZOJ<04VdM7tvI7Wdtw&Y6KvyBBy`W#^<$un+pjg))VmSm z*6yuzmdq{J*V@UuX^i+H>SSR%X|_-IM7zhKoNNYsh+$2Y(UR8fItbNR5>ax-ow41o z^A^)E+o`!lDoO^|ac38x$N-PPJ;Rl5bpLoTT0rph+6i`!cmy}^6Qw+aA zIQ*`gxSO`<;|h6p#fnMACwuGp;5k3(gLFQEN7pLRqa52=S_x3~?-jyueOMVxPD7r< z&BgEX;BrBn}MnUPfs+F@>OkmzlfEZS5X<-s^%m5 z6=7kyMoBi+`p`~;MQZoD){uM){2*ldv4fl@N;v0oNz_$#lLAd@dxNc^;O!MXl z9%`t3g2kcYPr#@j{=iBXe(Nf2=UA}2)Ycm@Hx2uVuvda}2Z;9bfGP0weC&DPiA zt}>4@_Rrv%Q)ybkH?4{?Uh=$K^cFAIn42!P!RZ>g zED#N$4(ay8o)UL+*Ru7t*Zv;wB=9`G1x6hn7s=Kvfg`%MX~%A(FU|692&|>~dQTgj zTTERPF1SMdVn~D>U;0i(6gAzc#AQTXYyvC_;qFt~nvR!Q(Yn_T_4V7gm+|b|W_=yh zJ`r-vi@sQ89~8{=?(V@a9xhj6RI8k);1maMgWwb5YD|4G6(yYubC;Jna&MG1Kyco! 
zxzLs2lf+Bx1rkSR;}_i@D!l}9rR3IPI$vRcG?slwQBi50Xg{#+dfwnONf~ECh}E?U zkU5pu``9)g8>OhXUDlT9L9#`M5BwxMwMqT7V&(g+M+hyShhzolMrmtt^YTa(MyH?GK5>?kTo zS5#0$L~7{D7F3#uf^;Q9z)*zHLqK$^NQ+4CC;}oS8)=5V6$mYf)R55BKq3T)5Fm#7 zPJnaX=RMc?2fp9y`7IaRGqc*vtb2XdT6h1-Q3aOOEivSbIrGl=ZDsba>!TkudfI9o zR^4EPGEtqL)U39n{llnRfkl)%4Rxf5GWrOm5UmAvk6hTo=pNO%PI>mq#|{ZKVSl;R z9%qo4@10(qVo4Dd%df2ex%zBE-X)Ynk~bS1ucoI*R%ki6ql)VbJA^9}5^25kbf|L{ zL%fFhBewy!Xx4xuu1#v}ETE?j`FpWKUPo7eCNw15B6kO}MW`LgS%h6cEe)EOphC!& zS>&?YRBfs?_G~`7`*l7-rKeb3zrO#McpL3nnycr;Sx=O6sc)9caOH?9K6=HCSzVM> zDU7s(Pz)+lndJrxt^V=HRPfpFUwN4TqQVP{c6CC?n zS;p4=t(*j}CH4HHI^<*ry!7gwEFbqmP38U}napWBQ_4eQ<2sTSU8rEaBY=94gZ3_) zX|;n#z(kh)mGMgY`}7w}{QK;&WCPBGI$bH`j$)Q&vw6C#V`>#!CJ%+UYtKMlPfsU#eeE=u%l6 z74b*cF+i8FxM(ci4R5%rn8wjm+(`kq7^s0wKWgn@zxJ^(H8Tr!-*;{4V`W3br^2-< zbzf!eG6=>*6^i!G;^bqd9!8D5U7&P_HffOxE6dZpXR$O{-Oyinp5w-WGAFiRm?dUfB`S;(N?cSXmWU6(Tt32O6^^ zlkMtMsDDeVmWH%w^^d;_Mo1ut0Q4U$p`}+Vu6Ly}MHWDHr7Y+s1d*wS8;PJ-hsLxn ztRKxXbGPC24HbP4uyY&4FGXpUu~LuwK*)qTp|hrnJ+&?UZGp?0g_hz=wIA!$oo8S( zv2*;a;w6om-r)`+vUqJOG0-VpRC-slf3z#Ll!@!bFZcP82Vu8uCf}iP%BpZRgJPtc z#Dc6G6fYEiVc%1m=Kj%w2wHzElRw(oRXHrW#&KM=NlGPnEC=Eh zydbpJQ+Q?b3N9kFjj#YUNI64;kFkn_8;CzVm)r)al4SVhea9K4N!*S@tpU?b`f zi@Oo^z+&2R@d1rl_j9qZrGwF{^&`vLw>H6YBtHk0&Vh}56d8WuYED8F7izdyQ;yWI zFxm@2c@FNF+K_iGoi<2K3cVbH(5wlqRp~H*e=JlnV9PPtIcy^wZlTl1-JB8YY+2ZU z$2aR>dT9NFVuT#lo6a<>+aVV{N(O6gTwHxKP+&stI0>Gbpz}*^416b?nm= zzEXaMP>@;4rb}Ah#JloHl44H<{ynY4qPev;7ZmiXENBxdVm+hHRx@k_H6T9J&NpRk zfn}3(#)HLP+El4Un>8fqDD|kaxAaWA zy8vP1<(@U00h@NO>60}@U0VFP>+ymCJjGPdRH5iism;>EkVP+0@?qs4Z$l9iGqjYz z`D-rZB!BovMVREAVQl~k8dVg}M0gI)XGIs&HODYmH+!2luO?mVZXb_3I#4ZSk*Kq; zrpB5$iY2UIoSEa>oHBe+ak^O4+`lr(dT{B%dGeBd2dIt1yGX!%okqkI$G?KsY|0G; z0&5aoS@ivVi|op6C@#P>czV_)i|tZGIb$=ttAbN+?w;dI9?YYqQ||ctWrWyhCiG0j zc|D-pBI;wf3K(`XUaB931^TOi7DRZ_8c|K1!NT6-(Q&8}dM(R#!aPSrS~c3CuL&Nx zK*rWya-JcFGul@j)7sBgurJ=w0jWO_qV|NW20fMK>y~DXo^pnRTK1Ip6~3%O4n%cA z)ZI$&;n>DhP#N8Vk!Z8cK(9w7F|EHB7t6NrprfD=tp`oy4Jxua-$9qnMq&`o4hTZf zk^I6wNC3u}Q=O-U!hgZa#6bOm`bdij74Vy$#U(Lv2MQKCOXt%2W;+Lv<*fS2mC8k@ zywVB>J0pC)N1bK2{In=$QE|PueYVJhA?xR!LOefnO^P`h>(??lS2iQXcIbblFzQ}p zU8Pn$cHRR0%lhSg5oVgVL`Chc>hvNr5_-mF2|q*b#IvoYVQV=-%A~%sibPK5%nth9 zz`ziiM#n#ahIALh#e!HC0g-DO?U6ndGi8}yqbzUld$2NC+%D9~h-qpG^=7&$$!An& zvcHI*nNS^MbmDz`LujMpy?s*4WNTWn7r3e_CTN8wNl)pyh;pa>|C*4jhQQc5hoFnj z=4mT|s4C*H+M4li9dU#D-}(;c3gX_gUff;#yIsC%>1x_iMrccNbWQD896U$#Ql%Tb zjOc==SQ##uu0DV|FffY+o>^uduKqMNR*6@Qwb&?IZvpg(sHsfQ#@^`g`hcM=vF5lY+48SpXi5a<6op+^7MmdE$@-x&_C5cexO@9-=26J}xhfGV^h0 z%}k1(W+XYVn^PSgPeAczS$^l*6wtvbzJcSlaM1Xvate;H&rxO&y)#Qo{i3l!?(1eA zvch&AHua!hY1k%cF>QR=#dt(uLMF2$!LrgSlTg~9xMl+(%$w86P{jbhbc1{+bc-J@|t&#q~Ux@e?;@Kp=bR zugX-e>chJ$2r?xsA=d|l{<{;3gli-P%jHgJGY-!tyWPMNz;tQ*Ey6k*&Nx zTWUSkh7vG(#1H~Q#3+grN+U~~Pf)cGMVc^QO~u6zelyRoCG)=EH#1uFI6W%>L)nwLhbqPz-43LfKHahl+jiKf*cIwJHq77^Th- z8e%c^|2i;e;K1A}RA0MtJFB>-U4AXc;YmR$yQ5b}>0y0|UZB4mTo@HgP@2-1L(~?@ zl0n0&2GA62g6NG{Db^E(AUGnipaY_cYkZEn%FkPiK zFhj269^_NMWtVKun@eH)kuq#0l+uU*Xaq=M|K;P%?rT+L-552kcvL)(Wy;cTGo*agExA`{U5I{`k)?O$@Und&1Qt~X zu`xGdk06IpX`E{fJ@^4-PfP)(IMQYs6t33ak;NEPW9u)O=*up`G)G~*^{?R-hkz+t z)F@&c4k@y4Qj2NJ&htfq#op~gpxk~qH7B62(%zqagC>O{N2fwbK@T)+3|C`I==wyx zq%zc&rCAmK69fd@zi^z|y5?7nq&EXM2r02sCKy zgpia>}CA`%=hEC-oY^;De-ec1vq8Y_{7&$z-%v!jd(s1zMw( zSde|aER{v8yo=T%a;$q{Y2%50WwYm`;A$|An0w(+%%p$FkBa7MmX`W@L^6_nkz*-d z-7ik!)G53sowuh%w}8f3gAcXq9pGBs6A$|+_9!wet=U+k@6>1!D0dp_4&5+j0=ECh z0wuqGu2}i@GR$7Ke(V1H-O!LB*Pr5WyU@Yob-!S0!x2`;>4i(r%0uTnE<)9Ifv%c3 zhX8C|DGlz1C!5sUTVl|pAy{z<5k8sml;l&)iOq7q6w^ZX#xJF^@AklMeC&8em8gn+0(dShtl 
zExh}B*`)lr1!P+N?J1^&?yw*{)MGDhDYz(mg{l#%*CgUY`D;o#&fp_`7-4>6>yrLxAbFo*a! zwj%Qmk(91TkU=Lv;2I@L3vIuJ8RqVAZEWTmL5k(8iNRLs_w`pRO{;c~c3%sfu-tE_ zQU60`)XiRmyGOh(C6hC-iN2&%Vw8CWR~2_W$Ct9w(0B*({*-Z>z8E2teMOGx*4o0gPlOL(%Y z54@nbbdtQ}V-4$!1b!5ub^Ctccn0z=W;w5RUUj&!=KbE`qijLU3gUvYw>`BiwLHFN zvQ^?-^DiAtYOK`PU9e}1p8?* zB}NIvb5w~K8pHP8_h6-wIm?)sJ1h+LL5OwP==tZt&VfZUSOoA6C|SAd;U&L=OkqU^ z3QZ&R^%CZor2#+B^ba|VDCn>i#x?^w>3U%`Sf_xvqMTSdPE}*zJA%#-3f8c@XdN>LemyO&#ZQia@q;C#q=N_JDU>n2+@mX zwK_)u-Db@7{37V0)A=D|jGk}@vocDFt)Vp4N(VV!anv*K{^&|> zPs2OVl{gMG5TLj^;1+#rt25OreK7+umaA?(6Bf%|zAfU#4ISwZQSVGhCo-2bb53_i zVQDCu7TXpNfofStmfs?>?@YrFD9;$_)ql7y!FXt!#;Ax^>Z@ESg05d6S;a(51oR^s za^W>rS)i2>tfbgmLou(ssc`UV8PtlIqJUslBbx%Ff<|?!aplqvGYiu%^~-MeC&%*bMSi?Ni=0$K-o+${ z=B`f($p+rrv7JxzMWg5IRnBQ{#EYBH0LqP*=U7Qk<#j@_Hj9A?-@%s@*7xU%vMXa7 z=pQoaM{`Q1qiv>hS3%vgLyrUad_o^HdKMHog*q7s+i>XBe~FH=>ep1S?Q@F@DP3XS z^g~YC_N}VwD*jCilT+ z*L&F6juTeo!YpBfEamZ=D9g!;Xz#4?3`j{Y_JeG(RT<%YRmys71%M%zAtGJ@ew_vo ziUStD&hYIYt59SLxYmCu%kZtPPY7~{h7`n6i^@V?yvx89R#!5E7@zV89?V)__V0`g zy_S$gL}DqooDRc9^BE}uhj=718#N@tQ30QhYztdI^jl_^qni+TV;}F>NJV*oIq^sp z(0X`|sC1ulFsQZxh#5y<&2XKY*L&kaX`1 zAMXWjR=(5E-TT7OhTu_jjsJr+FED!1Ycay;#AYG^*_Cf78st(8`=tjl^}$m$(`pM> zzVULCDz+Wk&F2PEGTe6Xckl;DZKW$jzlz(#YtKzRIGwzkC#-`@&ha1~y>C=*IuvzA zEG1MFj7Fu`oiyL1%g9fd$6pG7zExuRBQ@Mi7!M7TI)1e4ej z9SPsc!jT$##}k$WGHQ^!ouBA_y96e*d!E_I|N9eobomp~fF|(*jFy05T90hf6_@e6 zaTp|C47=kB*dE}in!y-zd0rE44#jT%*vE}Kfdo~ymF^2uJ-NyHC?BLi_uFZZ;9>tH zMUwwh6qqpSjr2uc?F-;h?U|8S4vvdpWAB>Bq3`((B1QCUC;i}1ltvR#ms74z9FPa zmHWAuFah5BU{Fm6wp|r~_|ub{Z-N7E-}Z4b*sY@@qoYKlW#Rw;>={&H&XRjFfJs@v)1Y&l<0T|C{hY)xdsO)V~~cXIt1JuS|FaENUlT@5!&! zH`o^h>)Ka4!mPj|WqGitl;yrQhqh#$4>@a9~gloHkt6!oswTO!3>L0UE9K}d4n{S*k7$8 z1-&IN?%h7;mU!*HG;hhFY)QXpkuu4@mB7S?FC^%mCv`7Ro%#AcO!p5SE%8+3&!WeB z4)G^PYztJm^XcL-o_s)wMs(C}P_y!fLjN{i3gEs!Q$bkrW~46q1x;KXK6~W=uk#7r zO#I=qQ-3uo1Ex#7pvQO<&*)xuoA^atF$Sa!^9n2Tzt`Oq)6ttJczDbYNfPt+2)11L z#-$`bR$)62f3{?c@MB#hm;vW4Y3v3Q z%ZY`gH@pN)r|IVG>baw~8 zi|OMhcabeFk9Fm?SZ>}6;ZJ@BST=n3w*U``?#gat2$r4 z=gQ-DAn||>Vf0TRHsfrw9btQTAC5DQn|~6_deeLtJo#0s3S-rN9&C#}U{cHX3cGkB z1?~yd|K@32rSYzT{8`n1?-qw{Hpr|AZRIR5NQU0Ypp8lVZXohARUNb0ySTKpPFq(> z?c(7dkOtb%IwS>l;9uL)e(~~op17Ivc(0>Q$Vvb=O~LlF$t;OgS(C3VSP7kS+H#gE+_Iv%5o-(@%F`{|eC*DrkY z?WOiX634WM!gOOFGn*OW3ZL;^R9fg1qJaJPgW7XI-ci^IV5zV3T@Rc z1JrmF*pmGloGsuXyYK1pkGo!>_j%;@;>Q@D_wFU79TNI(`Jr{%`TNyhpLz9Lr=QJI z)p9PyB6qFMepG zD*h07vEmA>?#3$)|K0S*s{2>uN}C0FW*UqdvAuJoSIn=om-u*f@A2^DjCZ>%8u4E- z`b#%)8#ccA^Ho00L}1gn=f`l8k0iE-7Cw7Bfa>g;KhNXu(bz_U0WD#(8Z$USEgIg- z`{ckm(ZH*ZpI*0aKMGO=j{!@qxOQ?EIE;T`;K9J6jISzy^Zrq~E_$WwX_+u5NYfP_(?MKM8EnwP>jGelCqGs_e5Fyf`ItT5r;v!pp~A)2 z{nzAu6&!5VB4}rY_C0U>#_Ko|6_gdJ@_O*8yiBLabKTA~4QBI~ZH@eV=Z(Ow`uq>h zm#_B>zv>aUCV0=)ksiKJLj8=igmz1EwiIWnyziM##qOqHRA^|Z`A7_O>w6PxCw> zG+|lNO1dg|+Cy;nJB~WzU_k4%WBA0~_(pSYWY@3nD!d=SIb50F+J#g61ptfbIV) z_CLy!x2n)C9s%~I_iP9B-_=L|Pc5f4_fAZ`IQQR5N4N1l2l8-x|L7j5()b@GJOBEr z3kSQ^T0md>zy7!JCg5rR(F0LiovZ$Xau;y@--Ul}{ja5^{6Z0cr#GL&6#hrO>Hqjy z%)O+Hg*gA?Pygem9=P&UZmV6~`QK_(O#>9~UX#AN!BK99#e0$2EeR-y4Pgo0_)$uaf^mrvIzt|1iM+-O2y4asGd?lZBo( zf_!S$B8yH`73&LL3HYMJA0QM4Dh}%=#uwMtW_oa92_0jL)Xx!mfAE;LPb>M4Ut-g) ze8XyLh1GZ`JsFrc{NrC=#a!Uy_NS8&ar68Ljg`zK^@N||o0raBm{InGgZjv|#dn{D zu+y#Hwl5n6ylu4ALd7&j(WrUf82*4go0FMa>IWDoO zv&K~3a&mv3P4s?tDtsG;#TzfO)OW?R5i~$Fb7EzXEY{Cq?Xu1u&boYWekc)jVZ_TJ zq$TxBWDt0;UY_xLQWbkPyf1S9>#du{{LrtwC;QpA?|&Ic+uii}Ry3Y1MvEKtzp0O# z;??hTrao-imi`%#BfG;$0 zJf_cKhrv~sr4L-dO0lIA2@&@}=D@u)-RSf;l9l*m`z=nWd;vQYmY%Ls8>wICdG(1I zT_9Uy`?p5u?s-DS$dZJ}T~Vawb~>T&$|c|h${!N_Yx7DCKK!dC%yB?kfFSwk`D27j zxdZH@-PmG^%cWyHS-k0b{`AU)*ELzC!0&(UysVX~u4VHUi 
z{#=VZ2p~LKv~fJr^f1=oz}5^mc}9F#Uo~AGUxZupthxGNJYU8R_FNF#Cz1XbQRw7h z-YFh@=-x;?FJUC3Tqo4kG7&AS3WF1LQ5T|n*o4&IW(YGt$aAEZ>#Lj<$$!n~7*ToMKo zJ4P3Z1Z7WMmOTBa{HM|7^QUphEn9rb_xnvsgT!KX_K)^Qyydg?H9w6!4p!~E`13xm z?WYeJgT-Uo`*dbNB#oz9UV<~)cQ(B4uaveef=hS*`l{&{{P)ap-k7yh)p=E8e#WV- zPjoYNBZtoTvx=~L!cxJ1?xXfNDi=uY-Smhq3Weo}Jo%@yMHGYyvJEm9zC!NY1mTpU zeC4f*dmNAQJmuq?;#W(pInv~(+r8D{FFB>LYyUr;plK6tkigGan3m5XEh>a>$T<6I zrWt>YM#N8>*vHPsELM2v_ye%3 zA9GJm6{(;t&lW8W)>#j5QR54X;|&_KI+!6{dQp_8M_^-}Jcnay;G7u0@JH!xo@Xz& z#Fxlx?MAbqM7r^HUfUD|)a~!%JeW<|>|gRBPCSa*SVzGk>dEFIpqZ!}iC=-<|1QLP z;qZna?3=zQ@fjIz(3a*0x%c}|-gqFpGkb*cw$3VRE4&>jKuNlK#hbsAS1Dr9&)`D>Jtr=Tg&hmjzwB1+ z^!Oiv!#9B`rd@o*4R&yk{P)+ho1N?PP_0iI1;FcNnYeluzicq5O6HN9i+VDU!UwcW z;*;?|8*T^1fv5`iByEK9l@E04>^u;on|*kLIM`|1@UTWDUhA>rf0Vj@M!yjNw*pv4 z?Q!yft#vMQ*LiN&$}P;iOo z*#|b1s(wC}XOp(%4gHi~1iFvjq)p_z-g(~Vr#EE1U&Lb{l7C~!u0M`gZrPh9R|F2F zPa*sPmp1hi2Mzb+Z#}eOJKpcGJ3)|ztJBF#SGkc}?xCp8Bd5aZx!ib5J0b1S+nqdLHZAn|Zu#e$$s30AbVn3?Ym5|E(CD8| z`*Ht+YI?VHTpwE;0kU992dA}yHGmYeYW44_M={EN`8umhW0N9^m{!#sAo zH_ih)y@^8J@6~=SvO7ig%<0;(S1SP@_EA~kn%va6?eP%6hj2Om=7;jN^ zVmBCZa6(4^a<}CR$3WyCV*XoO`fS*-G|uf^%oBgBZ;ETu zt7Uy*^ORx~OKH|;ZLq#N)Wx@{d!Y!fz1%$DxQ`dao4I>Nber-S-ES6~c3FJlPx9Lh zr5xL@?XmTWX9P@nJD-y2YItQ(;rw3!-+-YeIBJTotP!jV?%M&Qm<|ct*D{?dJ`YIVO-fFT+TfR*+M)|H^qA(LZVFaYt#l= z{pW4@o&)zFzUIS=70_L96poFPZ`kFAZpH5*0Yu^O$aGf7`dVG!{dS2SV6ef=#qiwDbe|`!<6YL?>0ZkIxkUQQE!hP1Lot7TTUG6Q5+6xkL%u z%Iml~p07vkV01MZ$olF(0Tqt~R}<~J-&nf67E5z{74y>Dd(}&uzF`~}wYQIgax8zG z^e>0KDB_)i2T!e!m-#xu<;#?r6Q~Wyqu*9x2wOVmB|`uEAQCW!mG$TYEA!`Hr) zl>m{O>m14~Z<&s}tJ8+D>?d?#*M@o!frWLcnX^Zw#GnY&w@E?ow?$n9ysy93n)t6< z%hR9g?=Lb`JB=Ze>;WmN>N9sAeXRz;-j>$onh^Q9gFDsp@F=8cPhd!BFH`LT>_sxf zBxkk^Bf24_Z))37{d8?6+Av_GH9$T%)8AcdF65c=f;4?&V`tfuLg@>+bu!bU+nJ*n zZNv1QMFy43^vHm@7b(!WWsYF1abpI_dIx(iioHY z;&dOpA6;u1fG=AtYc^p_OorsvJwqvNoYFQu38o`GmHi7!9Q%X*W6r_$H*06Ul4b2_ z*5(ZXF9+jNQ9-E={!Q1W$_dtEW>JwoO-qqWj-h*3{eQ_$ylOD$r*@>soDLHELr6#0 zX)HDmq7Yz%aJ*U7(p;}NN3AWRAGksAWQ&ap`F4chzVqc5&)T5OV46B_&8f;WKL=z+ z>HeLecV4)Ub2se0BRtg&V>22#1Eeo8vLL?f_ZA_(dENIPirpZz;$!f%{{kdn94i$P zqHbY1*7>>NYG%s!^ZQ~3T(ip0xVY~QU+79l#5@#BqL}OWPg}NmI)nx`3(s|BUn`C7 zu|-MJQ%i7O{W)-m9k$fV;@lpsYFL3s%XvH46Pq}>Fjw#X)lA&uLpy#kHxWFF+aQ2 z7pxPEFst3w3j?=8%bTIToyu&3y4=!K=4^j4+snhmU?vqc)IA<3EKBG{ltV74Q!E@< z%Q+vNg{AdjYGSka6?9aFQxvK@HJ}0_-D&^P5Z(zjT6B4_#?_d2np;=dD1c}`#znG7D zEwNlIFD_?;tK+z?XSy*;4@9H!m%9|)XAJ4@s5h87?q=&&fpcV*`Fif4Ur^MD;R}aK zc}g5b5^EMiZ-T~WQBDMUg-|4xPZ?0D$>k1E$Ix;+6nNF##-k|k6_#mVZpt%@SYu6M z6^B*5*#e^t!Rs$?DdSdW>jiMXho}$Wl*)Np3q0h*;2i2aUUtISc=fuz=8SW<9-Vt| zNN3fa!Q4P|+^r@Q?3k`0ZWEP7=}|5XI8Jg*)q%l6tTtOz%#fuyYCwcs8VP#Ga)}F$ z7L=`0RM&!4<^*ehnFtT1g+7=%51oqZZ|Xb7`U;oT%428{Ucwb&Pn~ivDOo7^g$8v- zlhSXqy8=^1(tIVXU}muR+4jJwVev1P^FvxBTlT!YL+QFaIZR$IK7G5wE7x!C0YO^r zvqBRlHv4{Bj3(ACo~^iV=+`^WDL62g(cWjv&PBW(s$;oixrZApFx$|jZKwa5jvsnm zB$K0bW+{E{tLUKJ9Axs^^iY`sL5_3C>D`>63UDES6B^DtAD9?Ih;gB3sEA>SZe#&^ zWbp{Y{ta}hN9#=8i{EP>kH*PJWiXd>VqU*lOCRYCiDnJS=-3<5$3`SdbyS%Xq$VfX zJh>I-o3dvTW_hHf{_%ukq{;^qsIvw>!*Vp7g=kJNu}{H%v>l)BMvSi4kS9L+CE7L! 
zH=~c-WgFPn`|0P?JN<6kdV3+853)k9wzwdsds37|P~og*8)gN~CkpjCtK=%3V3Gbd z>b)i19;bdo|GK9-aL-iR*%iG6pZZaW(otxRi34&$$UXE<|1}ogmzW3lPm}E1F4(Gw zXZ`pby?x}hATD>WyL|TK=xaHNee#q%2J)k_KE6IT=!Um44mBaVkm25CGWc5DFsO2% z{lMVaz}#aCNi9EvdXM5JsvrFP)Hh{5*>Olj%Myx^qg28SJhVw%w;}3^ZWbb!SZ(H3 zI~bG0E_iJRn>le7S;t=KUTt2?MWF0h2n4i&0PP82^eHUTA2efUhZgfIn-yd;BXPq< zK|kHZY&c`SL06OLEem4fO`6xJPQw!mY7M7G5q()5`N?Unp;J98E=YA1%InM|H$&V+ zov#NcH7&_rBS$eqqXVfBlwdwt$jK+cLOj9oT%v@@Jq#be;u=N!go#(IluuwXQnS)T zmVL!-%CEDap(YmTB1HGlhc=>A*83ikJ;Z~i#5Tw&=D5 zDoB!Fes{diJbwK05UyY=>VwGy=TF*lrD?v$4~YY`r(#;SXs@k zy&4I9Thv`ALy#YK2u9x+Pbx8}>#CB`nH;Xx(YL}PKtqG2xR`(;{JR$fow|~B6Yao^ ztORk=?|$13Q|eJVZm6{AmPa$gl@L@Cvnbbx^a+Vi(5P}K*x#Lm3=A%jr>`H+z?PQT zD`urGYS5IO789Ma%1a29$;gSN=#NMCVg>`W2=c{tL__Y{oQ<{XepLAm$csVa>@f`i z<@niDVh+m8Vx~566+)J>6^N7af;2b^FhyVac%A;T79b*cCtVO@Gc#fq({jwIi7|ma z_P8Zzc5L1@!QHA$b;AGVEo`ZLMjU0e9l7kb?9*E@Fh8>Dn$_aYxd+J%r6u>A4y>*h z%#TlTk6*i0=00JIPO?XzCDTeb{GTWz-K>|`WkRb;Ri$y(>ROa%F5K`1(;WLLo|aGo zz76d;yLOZbf9+f0P7b+46QI2`ypSn}V!SEK)pf6v_ zb7)P+oNx^7q&g>GZveXMsju4oko3)5+(<# zaRDAdp%E&KV1`qjnSSqaOn&8bT@^~bTYGo-#+F-||AbEoo?9w@bD1$B&-%DxTU$Ag zv`^K^%%NJ?2c?g-o>9e@P6Z6F8lV$nDM@8#DX$qi4-Kx<&C*~((iTB|ZZUzzDRoX{ z>Ef5x#*CZtZ830_d_8KdxUYB`{f3EKmve?^-Fc1naCPd!dul6o+K^<`mg!=ser7={ zUxy-RWk4GniweQpvZsOrn_C`FokcY8&;u2=saT^dZQ?W9uRWDV31b*RSn>fe=-_KH z99cCQU3|(CI+R3s$ugv;GAvzgXLKP|KX(?C$4UTyQ3Y5a_Ac$Do^iL5X9WdbVE7Ek9pdm zId!L(_w8Urel%niE8?=1r2SW;2tzB!ODb9wSy}(A=crJCtO}=HW)gh@z`aNz^$dMg zV~c6&lXHoYWov^mGBu~9CeKHctHr6OvQ9G#fNWN3>K)e|J_?#vH{OVV@^?c@F-oD# zrNpR#s-wZn@fX_F8Xg$fz7v*6d-}S)PY7IvKUTT}+eW$r?BqFzr-=$5GawR0%vK>Z zUThj2lY4C1C`GKxQ*6SP5jlqt)r^q>ua{w6yMHQrF%*$L%Qgd+)w{&|j?$My90`*> zD5;pXHvDj5B`u(K-gPL}vNm*LsIr9t8LTV;YH`itZK8Ud(eeoi8loA2^T@u7AJOXF zd7WV|&-ki9@?K3Wgi_vSN(B}ueEfp{c$%Kztl_Pa=UrUZBq`<)6sixq-tv>w{PT`U z0h*R&pT(&|RaeJQgE6vYmRa6sV%kc(1MtzalvL_$YL1tIy@J25hYfWh-Q5PLrMyN- z8;I+y9s2w`#?drGb^U6O5~NDqV%xG!wP4sb30IZQz4C;Y+ehN^Iau8mT?4YZjw8JM z@*ET>X|GJob=2R?0^5m44(igA@)^jI6jsC#OlCX>MD_ILUR@gG>WM{U1 zZ0|xs@ZTX~hQ&%GYK~YX9<^~Ib^h6yRi;QDqVxwGgcQmW7FK2bvwqcGW2f1648tmC zjFEHR>OU&d>jy&Thx~qCTRtTz?yorUhSft=fz7oAKC;lepo+M3zTpTaSvHdi>5f$y zw0)fvV>)V3(^Id=ta39Nze~T9CdZ62A*M{r=wrNZpj6i9mpYl#z{w&XqeoPD-^7#` z5aAc5dysgq2KsWH&pUThdv$G8R!>q4WNyUo=O@xzLS1R8j+5weW)P6kT82aE%ckP1 z$6lNZP`QEdj=TjNN;REK+~YiNM~Czr#9(#Bz}qz+TieG4_Kc4)>F7k@Xblx>cU%z5 zv4hl2Xfn=4R~&qurb5JXzLj=m7K2xvoWWSEml5HGl!~?g*%-4I1lNtXlhfJ6refj` ztJR?9nc}+7nL{}UlzasH)9p%}ns&YF_>6{`XNgX`MX0(yvuX^-v7r~>ZJ?Hu0)n{N z3k}Ghprn;VvKWbXQ&T4e-i)wJl~rJSR~J{Ek|)Hv)j1BvN#Hy(2IYZ6)gRbIle2>P zc(fhInnu}=$1lOwCKggg2lHjINS_kYVB&%18>{N)z)KZgqg4BYB~)1_;SJknPMLWv zQiU0fR%f?V>nvFzfsY<;fEX^DE<0SR`td9Cs^qgLVh+``B;3S%cu{F}9*;Nmy&FpKU|W?qQ5Nf<^MjR)RQtr!VrHg<0~4j_ zNp>&;xJE-hw(~gDS=otj{F0XHqN~RgF&g5Smsi-%RrR1`!AlN7AsJ-n7Q6?Lfy5qVkEL0u=O>=bmK z;W&RTzt@0eiXpCyXuBGxyZj1v*ydrPRGpV?x&)&7RKCw@ z_2}g=FSQd6e`Djb&-S|Cb(3^>pmxh253oDO*{Kbu zr4|ECG(U`90yhSR?nZ3WO1ld=-c=cpS@usQg`lQAJ($WYHJG}7+;yf)jZElG?DieZ z%|UfcScaBgrf~)G7baaQ4Ab_ziV$s3H4wTX;2s zdMnV(bE=2a=PYv9JoNApbWO_qSKu%%jJD<1)t+^YMj+stv^pQRW?5R~bX!}woSFXx z)ts4fokgpWgq&%kafeu6ACrR_2JVH*CZ)sHvX=8|G)r2HI&MQi3ZdSCi5HqI~4>5{ls;K*ktrOkB zI|$_Q5HYB%9b1TP<7`reON>dCf+QY7)4c!mIQIqNmJ?%;iZjsgxnOf%QCeD6zQTdk z1Aje^wNVMOcQv~1N3bXXDP8yuDpPd2{Ne4vUOxiWZA*$fgE16ujD7|nu>kt zczgO!osEE<#hL8=m#2;Ur!+BchjWNFtY&MrT-}OpHtSX-HOAvl^iR`;t6DaT?^xaJ zk$WHW)=o)np0%bJMnjw_AuR{z(i}Uamb$kqQ3`t&>t*t=VaXdS=pIv_u2oepRhgM! 
zCp@YLYfH2s9aXJ+sDi7_>m2zw6-R1{g_Z}x>k)L9nHDbt*7xu=7YrdsJj20#&a@Wj zw?pgjUNoeg@d4vcC|gIWH@Gx-!=e|o zF`3Nu+9--FMIj-ct;YtXH=xCDqS2MiTkN{J=Ar@Ow6mE zh+Z0-Efhs^(WYYR#DcHBX?12!)Gd#IP79BRQQo(vJE`3_uz`P3cQw9Tca4SF+raw! ze)3VPbsoY~Yf|aT3pZwN9ANjECjRg$cSoHOPf%!A8!zeavIn4=$Z5xctxpZQ=yCy< zu#}F7`KPhlrrars&y>Qb8-dZ?SIqE#(rGDIe3i%fgr9>5F^lp;vB5YQZ`pl+==tov zTL}QTa1KABvIRy7ALfElE;HV5>+VzEAZNa^xj+#;rW?Y*1b-Pz->Fa31y~mLo?dAl zevhR8;G+!V4{S?hV2f&#bL9EOt!YfSn3m3)%s0EYua@6xurJ67ac8w-9-KZIsIhG` zT)*voS_(hH{ExkLmV?OTC7~uA^d<_^XLmutAB*XC(kN2`+_sFs)cz<<$%&fY2nTNq z6XS-UHO@_pNPgS(M(r=p7gvQ5nVV!r5flnne;qN1ON(turGvOHJFITkg7V8OHbs+a zlFJKy?bno6zHtl>ubbZA<72*tQDJz z-qUQ(nb7x0ZKB}7#^E81JYAxUp^-$?f?8pM{*od}9sqU4<>!7K-2^qI0AkW0+SGnv z`HfA3`gP>aO^y}c-dTJybiT~0BA*M37B-}|4lfzOlHL#XRfC*0J0d(8qQBF2)ytm3B8<`<13fmSS3?Y zcx!~709ZwWazE;+?ygP34!D@@4lCIttmMR1&dE(cXt#>TmSNxEvTs=Z=|I?>-5XJJ z%V_x!%qF~RQ2v3(!UB?yfK32fpX%p->t?ozpY4>W+7dAr*7GC-_ZXBvZ45c=$ycPmVJ$Z9 zN56X>+!&zMnj^4b)RGtZxds+C^yLWOzRR0qZEH8k;xKj|T;4#B84|nBZ)L;zoCb7__MM2=yQjOUlkHv5 z!@Q}7H_+D9=lj0%|CZ$HvBhy~1V|PK%zM52bTDjvlle8A*9F~g-!}<+Aul{6@?--G z>WG@H_t3BpVDvrH#b!Oe9!KGrH{^VvkpQV1YQ1W9N4sCsKIuI?dy9WxeWIba0ql!h zQWE-ObANr}N_hp!T)=1ykl^yM95Pyd8mHBa>TJSUr$z@4VSCUegtZN44APF zA*&u$R?kju3S%3u$LYqFWs4yh8(?`aXj<_Gp-XkrFG4`Rx>5F34~C6w`zU}1YXMkU zT$C;jGW9QkEZU?lg+rUB@k&1MQ1Y95b!EaAaidoG04q6Ia6p!0LnTMRExLEwLw zwn#lHPsXb@wM2>?<9a94Z5pb7kyZzsV$oP8&z;`MSP|F^@u z7eIot-~XAUtS~gqo>v6A({kgH$%dR?ek~J&+cNXEyZp8J&cgn{75R#rv-w392cNZT`<4Wp}<+1>cJ_5t&hsuNeNnff8{%>h925sY< zIp1mfr&)LMT*Y>33j^0xowwVuUWUJ>{Xig7`u{1wMbnE@;ESAQFx*#}X;Cr5PNgwW zF74!7#p&#BzQFAbcYm>aoQvJJ*Y;!5!7mFrEXvQMQ zUg~%q*dKSa`ST)fhVAic4}8C8+<&F8J)TdONlg909lPHSf3?htzK1rQX+Lm#c3o)D z+c)OcnjIUfpk*?*;D@&QE$u&Bof=k87XEPf^-bUM?CmGt0++b|FJTIoUy)rP`L%dy zTmFd#;0pAA%bre^{{~zXAu9`8mCdGnp!stGyvgSu6ypC zhI~x<@|Jh4w}0KaXwi&YQ`@T^aR5v59W!da?(=cXxjMD^KBQ4#!f0>7edwz1kxLAP zb(&0u@ox1Iz1t?;_SOxav)4i4nfLoUeXfQ@-<=F*ZW3vX`sw7Dvof`LGqeD==iKva z@>ct$ANiQhD?ea=HgjL;Qg+sxYnA!0osXVe5z>BO#m?N_C+=Dmx}EyWVc}j9UkBV{ zartA`VuiA)GM}MA!LRtBxAjgU^S=oT9v$;$NpkdTN;}J4^m}RQfhX)jGgP-8%QwS8Nj9e5!#zhw4&^ufXPZs y0_Hr$!%cx`4uZ@{YVZ|mCLy$ZN-l)=;0&t;ucLK6U>d8w5E literal 0 HcmV?d00001 diff --git a/intermediate_source/FSDP1_tutorial.rst b/intermediate_source/FSDP1_tutorial.rst new file mode 100644 index 00000000000..8e5217c64a8 --- /dev/null +++ b/intermediate_source/FSDP1_tutorial.rst @@ -0,0 +1,448 @@ +Getting Started with Fully Sharded Data Parallel(FSDP) +====================================================== + +**Author**: `Hamid Shojanazeri `__, `Yanli Zhao `__, `Shen Li `__ + +.. note:: + |edit| View and edit this tutorial in `github `__. + +Training AI models at a large scale is a challenging task that requires a lot of compute power and resources. +It also comes with considerable engineering complexity to handle the training of these very large models. +`PyTorch FSDP `__, released in PyTorch 1.11 makes this easier. + +In this tutorial, we show how to use `FSDP APIs `__, for simple MNIST models that can be extended to other larger models such as `HuggingFace BERT models `__, +`GPT 3 models up to 1T parameters `__ . The sample DDP MNIST code courtesy of `Patrick Hu `_. + + +How FSDP works +-------------- +In `DistributedDataParallel `__, (DDP) training, each process/ worker owns a replica of the model and processes a batch of data, finally it uses all-reduce to sum up gradients over different workers. In DDP the model weights and optimizer states are replicated across all workers. 
FSDP is a type of data parallelism that shards model parameters, optimizer states and gradients across DDP ranks. + +When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes the training of some very large models feasible by allowing larger models or batch sizes to fit on device. This comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like overlapping communication and computation. + +.. figure:: /_static/img/distributed/fsdp_workflow.png + :width: 100% + :align: center + :alt: FSDP workflow + + FSDP Workflow + +At a high level FSDP works as follow: + +*In constructor* + +* Shard model parameters and each rank only keeps its own shard + +*In forward path* + +* Run all_gather to collect all shards from all ranks to recover the full parameter in this FSDP unit +* Run forward computation +* Discard parameter shards it has just collected + +*In backward path* + +* Run all_gather to collect all shards from all ranks to recover the full parameter in this FSDP unit +* Run backward computation +* Run reduce_scatter to sync gradients +* Discard parameters. + +One way to view FSDP's sharding is to decompose the DDP gradient all-reduce into reduce-scatter and all-gather. Specifically, during the backward pass, FSDP reduces and scatters gradients, ensuring that each rank possesses a shard of the gradients. Then it updates the corresponding shard of the parameters in the optimizer step. Finally, in the subsequent forward pass, it performs an all-gather operation to collect and combine the updated parameter shards. + +.. figure:: /_static/img/distributed/fsdp_sharding.png + :width: 100% + :align: center + :alt: FSDP allreduce + + FSDP Allreduce + +How to use FSDP +--------------- +Here we use a toy model to run training on the MNIST dataset for demonstration purposes. The APIs and logic can be applied to training larger models as well. + +*Setup* + +1.1 Install PyTorch along with Torchvision + +See the `Get Started guide `__ for information on installation. + +We add the following code snippets to a python script “FSDP_mnist.py”. + +1.2 Import necessary packages + +.. note:: + This tutorial is intended for PyTorch versions 1.12 and later. If you are using an earlier version, replace all instances of `size_based_auto_wrap_policy` with `default_auto_wrap_policy` and `fsdp_auto_wrap_policy` with `auto_wrap_policy`. + +.. code-block:: python + + # Based on: https://github.com/pytorch/examples/blob/master/mnist/main.py + import os + import argparse + import functools + import torch + import torch.nn as nn + import torch.nn.functional as F + import torch.optim as optim + from torchvision import datasets, transforms + + + from torch.optim.lr_scheduler import StepLR + + import torch.distributed as dist + import torch.multiprocessing as mp + from torch.nn.parallel import DistributedDataParallel as DDP + from torch.utils.data.distributed import DistributedSampler + from torch.distributed.fsdp import FullyShardedDataParallel as FSDP + from torch.distributed.fsdp.fully_sharded_data_parallel import ( + CPUOffload, + BackwardPrefetch, + ) + from torch.distributed.fsdp.wrap import ( + size_based_auto_wrap_policy, + enable_wrap, + wrap, + ) + +1.3 Distributed training setup. As we mentioned FSDP is a type of data parallelism which requires a distributed training environment, so here we use two helper functions to initialize the processes for distributed training and clean up. + +.. 
code-block:: python + + def setup(rank, world_size): + os.environ['MASTER_ADDR'] = 'localhost' + os.environ['MASTER_PORT'] = '12355' + + # initialize the process group + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + def cleanup(): + dist.destroy_process_group() + +2.1 Define our toy model for handwritten digit classification. + +.. code-block:: python + + class Net(nn.Module): + def __init__(self): + super(Net, self).__init__() + self.conv1 = nn.Conv2d(1, 32, 3, 1) + self.conv2 = nn.Conv2d(32, 64, 3, 1) + self.dropout1 = nn.Dropout(0.25) + self.dropout2 = nn.Dropout(0.5) + self.fc1 = nn.Linear(9216, 128) + self.fc2 = nn.Linear(128, 10) + + def forward(self, x): + + x = self.conv1(x) + x = F.relu(x) + x = self.conv2(x) + x = F.relu(x) + x = F.max_pool2d(x, 2) + x = self.dropout1(x) + x = torch.flatten(x, 1) + x = self.fc1(x) + x = F.relu(x) + x = self.dropout2(x) + x = self.fc2(x) + output = F.log_softmax(x, dim=1) + return output + +2.2 Define a train function + +.. code-block:: python + + def train(args, model, rank, world_size, train_loader, optimizer, epoch, sampler=None): + model.train() + ddp_loss = torch.zeros(2).to(rank) + if sampler: + sampler.set_epoch(epoch) + for batch_idx, (data, target) in enumerate(train_loader): + data, target = data.to(rank), target.to(rank) + optimizer.zero_grad() + output = model(data) + loss = F.nll_loss(output, target, reduction='sum') + loss.backward() + optimizer.step() + ddp_loss[0] += loss.item() + ddp_loss[1] += len(data) + + dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM) + if rank == 0: + print('Train Epoch: {} \tLoss: {:.6f}'.format(epoch, ddp_loss[0] / ddp_loss[1])) + +2.3 Define a validation function + +.. code-block:: python + + def test(model, rank, world_size, test_loader): + model.eval() + correct = 0 + ddp_loss = torch.zeros(3).to(rank) + with torch.no_grad(): + for data, target in test_loader: + data, target = data.to(rank), target.to(rank) + output = model(data) + ddp_loss[0] += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss + pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability + ddp_loss[1] += pred.eq(target.view_as(pred)).sum().item() + ddp_loss[2] += len(data) + + dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM) + + if rank == 0: + test_loss = ddp_loss[0] / ddp_loss[2] + print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format( + test_loss, int(ddp_loss[1]), int(ddp_loss[2]), + 100. * ddp_loss[1] / ddp_loss[2])) + +2.4 Define a distributed train function that wraps the model in FSDP + +**Note: to save the FSDP model, we need to call the state_dict on each rank then on Rank 0 save the overall states.** + +.. 
code-block:: python + + def fsdp_main(rank, world_size, args): + setup(rank, world_size) + + transform=transforms.Compose([ + transforms.ToTensor(), + transforms.Normalize((0.1307,), (0.3081,)) + ]) + + dataset1 = datasets.MNIST('../data', train=True, download=True, + transform=transform) + dataset2 = datasets.MNIST('../data', train=False, + transform=transform) + + sampler1 = DistributedSampler(dataset1, rank=rank, num_replicas=world_size, shuffle=True) + sampler2 = DistributedSampler(dataset2, rank=rank, num_replicas=world_size) + + train_kwargs = {'batch_size': args.batch_size, 'sampler': sampler1} + test_kwargs = {'batch_size': args.test_batch_size, 'sampler': sampler2} + cuda_kwargs = {'num_workers': 2, + 'pin_memory': True, + 'shuffle': False} + train_kwargs.update(cuda_kwargs) + test_kwargs.update(cuda_kwargs) + + train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs) + test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs) + my_auto_wrap_policy = functools.partial( + size_based_auto_wrap_policy, min_num_params=100 + ) + torch.cuda.set_device(rank) + + + init_start_event = torch.cuda.Event(enable_timing=True) + init_end_event = torch.cuda.Event(enable_timing=True) + + model = Net().to(rank) + + model = FSDP(model) + + optimizer = optim.Adadelta(model.parameters(), lr=args.lr) + + scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma) + init_start_event.record() + for epoch in range(1, args.epochs + 1): + train(args, model, rank, world_size, train_loader, optimizer, epoch, sampler=sampler1) + test(model, rank, world_size, test_loader) + scheduler.step() + + init_end_event.record() + + if rank == 0: + init_end_event.synchronize() + print(f"CUDA event elapsed time: {init_start_event.elapsed_time(init_end_event) / 1000}sec") + print(f"{model}") + + if args.save_model: + # use a barrier to make sure training is done on all ranks + dist.barrier() + states = model.state_dict() + if rank == 0: + torch.save(states, "mnist_cnn.pt") + + cleanup() + + + +2.5 Finally, parse the arguments and set the main function + +.. code-block:: python + + if __name__ == '__main__': + # Training settings + parser = argparse.ArgumentParser(description='PyTorch MNIST Example') + parser.add_argument('--batch-size', type=int, default=64, metavar='N', + help='input batch size for training (default: 64)') + parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N', + help='input batch size for testing (default: 1000)') + parser.add_argument('--epochs', type=int, default=10, metavar='N', + help='number of epochs to train (default: 14)') + parser.add_argument('--lr', type=float, default=1.0, metavar='LR', + help='learning rate (default: 1.0)') + parser.add_argument('--gamma', type=float, default=0.7, metavar='M', + help='Learning rate step gamma (default: 0.7)') + parser.add_argument('--no-cuda', action='store_true', default=False, + help='disables CUDA training') + parser.add_argument('--seed', type=int, default=1, metavar='S', + help='random seed (default: 1)') + parser.add_argument('--save-model', action='store_true', default=False, + help='For Saving the current Model') + args = parser.parse_args() + + torch.manual_seed(args.seed) + + WORLD_SIZE = torch.cuda.device_count() + mp.spawn(fsdp_main, + args=(WORLD_SIZE, args), + nprocs=WORLD_SIZE, + join=True) + + +We have recorded cuda events to measure the time of FSDP model specifics. The CUDA event time was 110.85 seconds. + +.. 
code-block:: bash + + python FSDP_mnist.py + + CUDA event elapsed time on training loop 40.67462890625sec + +Wrapping the model with FSDP, the model will look as follows, we can see the model has been wrapped in one FSDP unit. +Alternatively, we will look at adding the auto_wrap_policy next and will discuss the differences. + +.. code-block:: bash + + FullyShardedDataParallel( + (_fsdp_wrapped_module): FlattenParamsWrapper( + (_fpw_module): Net( + (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1)) + (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1)) + (dropout1): Dropout(p=0.25, inplace=False) + (dropout2): Dropout(p=0.5, inplace=False) + (fc1): Linear(in_features=9216, out_features=128, bias=True) + (fc2): Linear(in_features=128, out_features=10, bias=True) + ) + ) + ) + +The following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. + + +.. figure:: /_static/img/distributed/FSDP_memory.gif + :width: 100% + :align: center + :alt: FSDP peak memory + + FSDP Peak Memory Usage + +Applying *auto_wrap_policy* in FSDP otherwise, FSDP will put the entire model in one FSDP unit, which will reduce computation efficiency and memory efficiency. +The way it works is that, suppose your model contains 100 Linear layers. If you do FSDP(model), there will only be one FSDP unit which wraps the entire model. +In that case, the allgather would collect the full parameters for all 100 linear layers, and hence won't save CUDA memory for parameter sharding. +Also, there is only one blocking allgather call for the all 100 linear layers, there will not be communication and computation overlapping between layers. + +To avoid that, you can pass in an auto_wrap_policy, which will seal the current FSDP unit and start a new one automatically when the specified condition is met (e.g., size limit). +In that way you will have multiple FSDP units, and only one FSDP unit needs to collect full parameters at a time. E.g., suppose you have 5 FSDP units, and each wraps 20 linear layers. +Then, in the forward, the 1st FSDP unit will allgather parameters for the first 20 linear layers, do computation, discard the parameters and then move on to the next 20 linear layers. So, at any point in time, each rank only materializes parameters/grads for 20 linear layers instead of 100. + + +To do so in 2.4 we define the auto_wrap_policy and pass it to FSDP wrapper, in the following example, my_auto_wrap_policy defines that a layer could be wrapped or sharded by FSDP if the number of parameters in this layer is larger than 100. +If the number of parameters in this layer is smaller than 100, it will be wrapped with other small layers together by FSDP. +Finding an optimal auto wrap policy is challenging, PyTorch will add auto tuning for this config in the future. Without an auto tuning tool, it is good to profile your workflow using different auto wrap policies experimentally and find the optimal one. + +.. code-block:: python + + my_auto_wrap_policy = functools.partial( + size_based_auto_wrap_policy, min_num_params=20000 + ) + torch.cuda.set_device(rank) + model = Net().to(rank) + + model = FSDP(model, + auto_wrap_policy=my_auto_wrap_policy) + +Applying the auto_wrap_policy, the model would be as follows: + +.. 
code-block:: bash + + FullyShardedDataParallel( + (_fsdp_wrapped_module): FlattenParamsWrapper( + (_fpw_module): Net( + (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1)) + (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1)) + (dropout1): Dropout(p=0.25, inplace=False) + (dropout2): Dropout(p=0.5, inplace=False) + (fc1): FullyShardedDataParallel( + (_fsdp_wrapped_module): FlattenParamsWrapper( + (_fpw_module): Linear(in_features=9216, out_features=128, bias=True) + ) + ) + (fc2): Linear(in_features=128, out_features=10, bias=True) + ) + ) + + +.. code-block:: bash + + python FSDP_mnist.py + + CUDA event elapsed time on training loop 41.89130859375sec + +The following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on a g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. +It can be observed that the peak memory usage on each device is smaller compared to FSDP without auto wrap policy applied, from ~75 MB to 66 MB. + +.. figure:: /_static/img/distributed/FSDP_autowrap.gif + :width: 100% + :align: center + :alt: FSDP peak memory + + FSDP Peak Memory Usage using Auto_wrap policy + +*CPU Off-loading*: In case the model is very large that even with FSDP wouldn't fit into GPUs, then CPU offload can be helpful here. + +Currently, only parameter and gradient CPU offload is supported. It can be enabled via passing in cpu_offload=CPUOffload(offload_params=True). + +Note that this currently implicitly enables gradient offloading to CPU in order for params and grads to be on the same device to work with the optimizer. This API is subject to change. The default is None in which case there will be no offloading. + +Using this feature may slow down the training considerably, due to frequent copying of tensors from host to device, but it could help improve memory efficiency and train larger scale models. + +In 2.4 we just add it to the FSDP wrapper + + +.. code-block:: python + + model = FSDP(model, + auto_wrap_policy=my_auto_wrap_policy, + cpu_offload=CPUOffload(offload_params=True)) + + +Compare it with DDP, if in 2.4 we just normally wrap the model in DPP, saving the changes in “DDP_mnist.py”. + +.. code-block:: python + + model = Net().to(rank) + model = DDP(model) + + +.. code-block:: bash + + python DDP_mnist.py + + CUDA event elapsed time on training loop 39.77766015625sec + +The following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch profiler. + +.. figure:: /_static/img/distributed/DDP_memory.gif + :width: 100% + :align: center + :alt: FSDP peak memory + + DDP Peak Memory Usage using Auto_wrap policy + + +Considering the toy example and tiny MNIST model we defined here, we can observe the difference between peak memory usage of DDP and FSDP. +In DDP each process holds a replica of the model, so the memory footprint is higher compared to FSDP which shards the model parameters, optimizer states and gradients over DDP ranks. +The peak memory usage using FSDP with auto_wrap policy is the lowest followed by FSDP and DDP. + +Also, looking at timings, considering the small model and running the training on a single machine, FSDP with and without auto_wrap policy performed almost as fast as DDP. +This example does not represent most of the real applications, for detailed analysis and comparison between DDP and FSDP please refer to this `blog post `__ . 
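+
+*Backward prefetching*: Finally, note that the imports in 1.2 also bring in ``BackwardPrefetch``, which is not used in the walkthrough above. As an optional, rough sketch (reusing ``my_auto_wrap_policy`` from 2.4; any speedup depends on the model and hardware), backward prefetching can be enabled alongside the other options when wrapping the model:
+
+.. code-block:: python
+
+    model = FSDP(model,
+                 auto_wrap_policy=my_auto_wrap_policy,
+                 cpu_offload=CPUOffload(offload_params=True),
+                 # all-gather the next set of parameters before computing the current set's gradients
+                 backward_prefetch=BackwardPrefetch.BACKWARD_PRE)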
diff --git a/intermediate_source/FSDP_tutorial.rst b/intermediate_source/FSDP_tutorial.rst
index 8e5217c64a8..98258716432 100644
--- a/intermediate_source/FSDP_tutorial.rst
+++ b/intermediate_source/FSDP_tutorial.rst
@@ -1,448 +1,411 @@
 Getting Started with Fully Sharded Data Parallel(FSDP)
 ======================================================
 
-**Author**: `Hamid Shojanazeri `__, `Yanli Zhao `__, `Shen Li `__
+**Author**: `Wei Feng `__, `Will Constable `__, `Yifan Mao `__
 
 .. note::
-   |edit| View and edit this tutorial in `github `__.
+   |edit| Check out the code in this tutorial from `pytorch/examples `__.
 
-Training AI models at a large scale is a challenging task that requires a lot of compute power and resources.
-It also comes with considerable engineering complexity to handle the training of these very large models.
-`PyTorch FSDP `__, released in PyTorch 1.11 makes this easier.
-
-In this tutorial, we show how to use `FSDP APIs `__, for simple MNIST models that can be extended to other larger models such as `HuggingFace BERT models `__,
-`GPT 3 models up to 1T parameters `__ . The sample DDP MNIST code courtesy of `Patrick Hu `_.
-
-
-How FSDP works
---------------
+How FSDP2 works
+---------------
-In `DistributedDataParallel `__, (DDP) training, each process/ worker owns a replica of the model and processes a batch of data, finally it uses all-reduce to sum up gradients over different workers. In DDP the model weights and optimizer states are replicated across all workers. FSDP is a type of data parallelism that shards model parameters, optimizer states and gradients across DDP ranks.
+In `DistributedDataParallel `__ (DDP) training, each rank owns a model replica and processes a batch of data; finally, it uses all-reduce to sync gradients across ranks.
 
-When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes the training of some very large models feasible by allowing larger models or batch sizes to fit on device. This comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like overlapping communication and computation.
+Compared with DDP, FSDP reduces the GPU memory footprint by sharding model parameters, gradients, and optimizer states. This makes it feasible to train models that cannot fit on a single GPU. As shown in the figure below:
+
+* Outside of forward and backward computation, parameters are fully sharded
+* Before forward and backward, sharded parameters are all-gathered into unsharded parameters
+* Inside backward, local unsharded gradients are reduce-scattered into sharded gradients
+* Optimizer updates sharded parameters with sharded gradients, resulting in sharded optimizer states
 
 .. figure:: /_static/img/distributed/fsdp_workflow.png
    :width: 100%
   :align: center
   :alt: FSDP workflow
 
-    FSDP Workflow
-
-At a high level FSDP works as follow:
 
-*In constructor*
+FSDP can be considered a decomposition of DDP's all-reduce into reduce-scatter and all-gather operations.
 
-* Shard model parameters and each rank only keeps its own shard
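+
+To make the decomposition concrete, here is a minimal sketch of the collective identity behind that statement (this snippet is illustrative and not part of the example repo; it assumes ``torch.distributed`` is already initialized, e.g. via ``torchrun``, and that ``x`` is a 1-D tensor with the same shape on every rank and a length divisible by the world size):
+
+.. code-block:: python
+
+    import torch
+    import torch.distributed as dist
+
+    def allreduce_via_reduce_scatter_allgather(x: torch.Tensor) -> torch.Tensor:
+        world_size = dist.get_world_size()
+        # reduce-scatter: each rank ends up with one summed shard (in FSDP: sharded gradients)
+        shard = torch.empty(x.numel() // world_size, dtype=x.dtype, device=x.device)
+        dist.reduce_scatter_tensor(shard, x, op=dist.ReduceOp.SUM)
+        # all-gather: every rank reassembles the full summed tensor (in FSDP: unsharded parameters)
+        out = torch.empty_like(x)
+        dist.all_gather_into_tensor(out, shard)
+        return out  # same values as all-reducing x with SUM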
+.. figure:: /_static/img/distributed/fsdp_sharding.png
+   :width: 100%
+   :align: center
+   :alt: FSDP all-gather and reduce-scatter
 
 
-*In forward path*
 
-* Run all_gather to collect all shards from all ranks to recover the full parameter in this FSDP unit
-* Run forward computation
-* Discard parameter shards it has just collected
+Compared with `FSDP1
+`__, FSDP2 has the following advantages:
 
-*In backward path*
+* Representing sharded parameters as `DTensor `_ sharded on dim-i, allowing for easy manipulation of individual parameters, communication-free sharded state dicts, and a simpler meta-device initialization flow.
+* Improving the memory management system to achieve lower and deterministic GPU memory usage, by avoiding ``recordStream`` (`doc `_) and doing so without any CPU synchronization.
+* Offering a tensor subclass extension point to customize the all-gather, e.g. float8 all-gather for float8 linears (`doc `_) and NF4 for QLoRA (`doc `_).
+* Mixing frozen and non-frozen parameters in the same communication group without using extra memory.
 
-* Run all_gather to collect all shards from all ranks to recover the full parameter in this FSDP unit
-* Run backward computation
-* Run reduce_scatter to sync gradients
-* Discard parameters.
+How to use FSDP2
+----------------
 
-One way to view FSDP's sharding is to decompose the DDP gradient all-reduce into reduce-scatter and all-gather. Specifically, during the backward pass, FSDP reduces and scatters gradients, ensuring that each rank possesses a shard of the gradients. Then it updates the corresponding shard of the parameters in the optimizer step. Finally, in the subsequent forward pass, it performs an all-gather operation to collect and combine the updated parameter shards.
+Model Initialization
+~~~~~~~~~~~~~~~~~~~~
 
-.. figure:: /_static/img/distributed/fsdp_sharding.png
    :width: 100%
    :align: center
    :alt: FSDP allreduce
+**Applying fully_shard on submodules**: Different from DDP, we should apply `fully_shard `_ on submodules as well as the root model. In the transformer example below, we apply ``fully_shard`` on each layer first, then on the root model.
 
-    FSDP Allreduce
+* During forward computation of ``layers[i]``, the rest of the layers remain sharded to reduce the memory footprint
+* Inside ``fully_shard(model)``, FSDP2 excludes parameters from ``model.layers`` and classifies the remaining parameters into a parameter group for performant all-gather and reduce-scatter
+* ``fully_shard`` moves the sharded model to the actual training device (e.g. ``cuda``)
 
-How to use FSDP
----------------
-Here we use a toy model to run training on the MNIST dataset for demonstration purposes. The APIs and logic can be applied to training larger models as well.
 
-*Setup*
+**Command**: ``torchrun --nproc_per_node 2 train.py``
 
-1.1 Install PyTorch along with Torchvision
+.. code-block:: python
 
-See the `Get Started guide `__ for information on installation.
+    from torch.distributed.fsdp import fully_shard, FSDPModule
+    model = Transformer()
+    for layer in model.layers:
+        fully_shard(layer)
+    fully_shard(model)
 
-We add the following code snippets to a python script “FSDP_mnist.py”.
+    assert isinstance(model, Transformer)
+    assert isinstance(model, FSDPModule)
+    print(model)
+    # FSDPTransformer(
+    #   (tok_embeddings): Embedding(...)
+    #   ...
+    #   (layers): 3 x FSDPTransformerBlock(...)
+    #   (output): Linear(...)
+    # )
 
-1.2 Import necessary packages
+We can inspect the nested wrapping with ``print(model)``. 
``FSDPTransformer`` is a joint class of `Transformer `_ and `FSDPModule
+<https://docs.pytorch.org/docs/main/distributed.fsdp.fully_shard.html#torch.distributed.fsdp.FSDPModule>`_. The same applies to `FSDPTransformerBlock `_. All FSDP2 public APIs are exposed through ``FSDPModule``. For example, users can call ``model.unshard()`` to manually control all-gather schedules. See "explicit prefetching" below for details.
 
-.. note::
-   This tutorial is intended for PyTorch versions 1.12 and later. If you are using an earlier version, replace all instances of `size_based_auto_wrap_policy` with `default_auto_wrap_policy` and `fsdp_auto_wrap_policy` with `auto_wrap_policy`.
+**model.parameters() as DTensor**: ``fully_shard`` shards parameters across ranks, and converts ``model.parameters()`` from plain ``torch.Tensor`` to DTensor to represent sharded parameters. FSDP2 shards on dim-0 by default, so the DTensor placements are `Shard(dim=0)`. Say we have N ranks and a parameter with N rows before sharding. After sharding, each rank will have 1 row of the parameter. We can inspect sharded parameters using ``param.to_local()``.
 
 .. code-block:: python
 
-    # Based on: https://github.com/pytorch/examples/blob/master/mnist/main.py
-    import os
-    import argparse
-    import functools
-    import torch
-    import torch.nn as nn
-    import torch.nn.functional as F
-    import torch.optim as optim
-    from torchvision import datasets, transforms
+   from torch.distributed.tensor import DTensor
+   for param in model.parameters():
+       assert isinstance(param, DTensor)
+       assert param.placements == (Shard(0),)
+       # inspect sharded parameters with param.to_local()
 
+   optim = torch.optim.Adam(model.parameters(), lr=1e-2)
 
-    from torch.optim.lr_scheduler import StepLR
+Note that the optimizer is constructed after applying ``fully_shard``. Both the model and optimizer state dicts are represented in DTensor.
 
-    import torch.distributed as dist
-    import torch.multiprocessing as mp
-    from torch.nn.parallel import DistributedDataParallel as DDP
-    from torch.utils.data.distributed import DistributedSampler
-    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
-    from torch.distributed.fsdp.fully_sharded_data_parallel import (
-        CPUOffload,
-        BackwardPrefetch,
-    )
-    from torch.distributed.fsdp.wrap import (
-        size_based_auto_wrap_policy,
-        enable_wrap,
-        wrap,
-    )
+DTensor facilitates the optimizer step, gradient clipping, and checkpointing:
 
-1.3 Distributed training setup. As we mentioned FSDP is a type of data parallelism which requires a distributed training environment, so here we use two helper functions to initialize the processes for distributed training and clean up.
+* ``torch.optim.Adam`` and ``torch.nn.utils.clip_grad_norm_`` work out of the box for DTensor parameters. This keeps the code consistent between single-device and distributed training
+* We can use DTensor and DCP APIs to manipulate parameters to get a full state dict; see the "state dict" section below for details. For distributed state dicts, we can save/load checkpoints (`doc `_) without extra communication
+
+
+Forward/Backward with Prefetching
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**command**: ``torchrun --nproc_per_node 2 train.py``
 
 .. 
code-block:: python
-    def setup(rank, world_size):
-        os.environ['MASTER_ADDR'] = 'localhost'
-        os.environ['MASTER_PORT'] = '12355'
+    for _ in range(epochs):
+        x = torch.randint(0, vocab_size, (batch_size, seq_len), device=device)
+        loss = model(x).sum()
+        loss.backward()
+        optim.step()
+        optim.zero_grad()
-        # initialize the process group
-        dist.init_process_group("nccl", rank=rank, world_size=world_size)
+``fully_shard`` registers forward/backward hooks to all-gather parameters before computation and reshard parameters after computation. To overlap all-gathers with computation, FSDP2 offers **implicit prefetching** that works out of the box with the training loop above and **explicit prefetching** for advanced users to control all-gather schedules manually.
-    def cleanup():
-        dist.destroy_process_group()
+**Implicit Prefetching**: The CPU thread issues all-gather i before layer i. All-gathers are queued into their own CUDA stream while layer i computation happens in the default stream. For non-CPU-bound workloads (e.g., a Transformer with a large batch size), all-gather i+1 can overlap with computation for layer i. Implicit prefetching works similarly in the backward pass, except all-gathers are issued in the reverse of the post-forward order.
-2.1 Define our toy model for handwritten digit classification.
+.. figure:: /_static/img/distributed/fsdp_implicit.png
+    :width: 100%
+    :align: center
+    :alt: FSDP Implicit
-.. code-block:: python
+We recommend starting with implicit prefetching to understand the performance out of the box.
-    class Net(nn.Module):
-        def __init__(self):
-            super(Net, self).__init__()
-            self.conv1 = nn.Conv2d(1, 32, 3, 1)
-            self.conv2 = nn.Conv2d(32, 64, 3, 1)
-            self.dropout1 = nn.Dropout(0.25)
-            self.dropout2 = nn.Dropout(0.5)
-            self.fc1 = nn.Linear(9216, 128)
-            self.fc2 = nn.Linear(128, 10)
-
-        def forward(self, x):
-
-            x = self.conv1(x)
-            x = F.relu(x)
-            x = self.conv2(x)
-            x = F.relu(x)
-            x = F.max_pool2d(x, 2)
-            x = self.dropout1(x)
-            x = torch.flatten(x, 1)
-            x = self.fc1(x)
-            x = F.relu(x)
-            x = self.dropout2(x)
-            x = self.fc2(x)
-            output = F.log_softmax(x, dim=1)
-            return output
-
-2.2 Define a train function
+**Explicit Prefetching**: Users can specify forward ordering with `set_modules_to_forward_prefetch `_, and backward ordering with `set_modules_to_backward_prefetch `_. As shown in the code below, the CPU thread issues all-gathers i + 1 and i + 2 at layer i.
-.. code-block:: python
+Explicit prefetching works well in the following situations:
+
+**CPU-bound workload**: With implicit prefetching, the CPU thread can be too slow to issue the all-gather for layer i+1 while the kernels from layer i are executing. We have to explicitly issue all-gather i+1 before running forward for layer i.
+
+**Prefetching for 2+ layers**: Implicit prefetching only all-gathers the next layer at a time to keep the memory footprint minimal. With explicit prefetching, we can all-gather multiple layers at a time, potentially trading extra memory for better performance.
See ``layers_to_prefetch`` in the code - def train(args, model, rank, world_size, train_loader, optimizer, epoch, sampler=None): - model.train() - ddp_loss = torch.zeros(2).to(rank) - if sampler: - sampler.set_epoch(epoch) - for batch_idx, (data, target) in enumerate(train_loader): - data, target = data.to(rank), target.to(rank) - optimizer.zero_grad() - output = model(data) - loss = F.nll_loss(output, target, reduction='sum') - loss.backward() - optimizer.step() - ddp_loss[0] += loss.item() - ddp_loss[1] += len(data) - - dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM) - if rank == 0: - print('Train Epoch: {} \tLoss: {:.6f}'.format(epoch, ddp_loss[0] / ddp_loss[1])) - -2.3 Define a validation function +**Issuing 1st all-gather earlier**: Implicit prefetching happens at the time of calling ``model(x)``. The 1st all-gather gets exposed. We can call `model.unshard() `_ explicitly earlier to issue 1st all-gather earlier + +**command**: ``torchrun --nproc_per_node 2 train.py --explicit-prefetching`` .. code-block:: python - def test(model, rank, world_size, test_loader): - model.eval() - correct = 0 - ddp_loss = torch.zeros(3).to(rank) - with torch.no_grad(): - for data, target in test_loader: - data, target = data.to(rank), target.to(rank) - output = model(data) - ddp_loss[0] += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss - pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability - ddp_loss[1] += pred.eq(target.view_as(pred)).sum().item() - ddp_loss[2] += len(data) + num_to_forward_prefetch = 2 + for i, layer in enumerate(model.layers): + if i >= len(model.layers) - num_to_forward_prefetch: + break + layers_to_prefetch = [ + model.layers[i + j] for j in range(1, num_to_forward_prefetch + 1) + ] + layer.set_modules_to_forward_prefetch(layers_to_prefetch) + + num_to_backward_prefetch = 2 + for i, layer in enumerate(model.layers): + if i < num_to_backward_prefetch: + continue + layers_to_prefetch = [ + model.layers[i - j] for j in range(1, num_to_backward_prefetch + 1) + ] + layer.set_modules_to_backward_prefetch(layers_to_prefetch) + + for _ in range(epochs): + # trigger 1st all-gather earlier + # this overlaps all-gather with any computation before model(x) + model.unshard() + x = torch.randint(0, vocab_size, (batch_size, seq_len), device=device) + loss = model(x).sum() + loss.backward() + optim.step() + optim.zero_grad() + + +Enabling Mixed Precision +~~~~~~~~~~~~~~~ + +FSDP2 offers a flexible `mixed precision policy `_ to speed up training. One typical use case are + +* Casting float32 parameters to bfloat16 for forward/backward computation, see ``param_dtype=torch.bfloat16`` +* Upcasting gradients to float32 for reduce-scatter to preserve accuracy, see ``reduce_dtype=torch.float32`` + +Comparing with `torch.amp `_, FSDP2 mixed precision has following advantages - dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM) +* **Performant and flexible parameter casting**: All the parameters inside a ``FSDPModule`` are cast together at the module boundary (before and after before/backward). We can set different mixed precision policies for each layer. For example, the first few layers can be in float32 while remaining layers can be in bfloat16. - if rank == 0: - test_loss = ddp_loss[0] / ddp_loss[2] - print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format( - test_loss, int(ddp_loss[1]), int(ddp_loss[2]), - 100. 
* ddp_loss[1] / ddp_loss[2])) +* **float32 gradient reduction (reduce-scatter)**: Gradients might vary a lot from rank to rank. Reducing gradients in float32 can be critical for numerics. -2.4 Define a distributed train function that wraps the model in FSDP -**Note: to save the FSDP model, we need to call the state_dict on each rank then on Rank 0 save the overall states.** + +**command**: ``torchrun --nproc_per_node 2 train.py --mixed-precision`` .. code-block:: python - def fsdp_main(rank, world_size, args): - setup(rank, world_size) - - transform=transforms.Compose([ - transforms.ToTensor(), - transforms.Normalize((0.1307,), (0.3081,)) - ]) - - dataset1 = datasets.MNIST('../data', train=True, download=True, - transform=transform) - dataset2 = datasets.MNIST('../data', train=False, - transform=transform) - - sampler1 = DistributedSampler(dataset1, rank=rank, num_replicas=world_size, shuffle=True) - sampler2 = DistributedSampler(dataset2, rank=rank, num_replicas=world_size) - - train_kwargs = {'batch_size': args.batch_size, 'sampler': sampler1} - test_kwargs = {'batch_size': args.test_batch_size, 'sampler': sampler2} - cuda_kwargs = {'num_workers': 2, - 'pin_memory': True, - 'shuffle': False} - train_kwargs.update(cuda_kwargs) - test_kwargs.update(cuda_kwargs) - - train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs) - test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs) - my_auto_wrap_policy = functools.partial( - size_based_auto_wrap_policy, min_num_params=100 + model = Transformer(model_args) + fsdp_kwargs = { + "mp_policy": MixedPrecisionPolicy( + param_dtype=torch.bfloat16, + reduce_dtype=torch.float32, ) - torch.cuda.set_device(rank) - - - init_start_event = torch.cuda.Event(enable_timing=True) - init_end_event = torch.cuda.Event(enable_timing=True) + } + for layer in model.layers: + fully_shard(layer, **fsdp_kwargs) + fully_shard(model, **fsdp_kwargs) + + # sharded parameters are float32 + for param in model.parameters(): + assert param.dtype == torch.float32 + + # unsharded parameters are bfloat16 + model.unshard() + for param in model.parameters(recurse=False): + assert param.dtype == torch.bfloat16 + model.reshard() - model = Net().to(rank) + # optimizer states are in float32 + optim = torch.optim.Adam(model.parameters(), lr=1e-2) - model = FSDP(model) + # training loop + # ... - optimizer = optim.Adadelta(model.parameters(), lr=args.lr) - scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma) - init_start_event.record() - for epoch in range(1, args.epochs + 1): - train(args, model, rank, world_size, train_loader, optimizer, epoch, sampler=sampler1) - test(model, rank, world_size, test_loader) - scheduler.step() - init_end_event.record() +Gradient Clipping and Optimizer with DTensor +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +**command**: ``torchrun --nproc_per_node 2 train.py`` + +.. 
code-block:: python - if rank == 0: - init_end_event.synchronize() - print(f"CUDA event elapsed time: {init_start_event.elapsed_time(init_end_event) / 1000}sec") - print(f"{model}") + # optim is constructed base on DTensor model parameters + optim = torch.optim.Adam(model.parameters(), lr=1e-2) + for _ in range(epochs): + x = torch.randint(0, vocab_size, (batch_size, seq_len), device=device) + loss = model(x).sum() + loss.backward() + torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm) + optim.step() + optim.zero_grad() - if args.save_model: - # use a barrier to make sure training is done on all ranks - dist.barrier() - states = model.state_dict() - if rank == 0: - torch.save(states, "mnist_cnn.pt") - - cleanup() +Optimizer is initialized after applying ``fully_shard`` on the model, and holds reference to DTensor ``model.parameters()``. For gradient clipping, ``torch.nn.utils.clip_grad_norm_`` works for DTensor parameters. Tensor ops will be dispatched correctly inside DTensor to communicate partial tensors across ranks to preserve the single device semantic. +State Dicts with DTensor APIs +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +We showcase how to convert a full state dict into a DTensor state dict for loading, and how to convert it back to full state dict for saving. -2.5 Finally, parse the arguments and set the main function +**command**: ``torchrun --nproc_per_node 2 train.py`` + +* For the 1st time, it creates checkpoints for the model and optimizer +* For the 2nd time, it loads from the previous checkpoint to resume training + +**Loading state dicts**: We initialize the model under meta device and call ``fully_shard`` to convert ``model.parameters()`` from plain ``torch.Tensor`` to DTensor. After reading the full state dict from torch.load, we can call `distributed_tensor `_ to convert plain ``torch.Tensor`` into DTensor, using the same placements and device mesh from ``model.state_dict()``. Finally we can call `model.load_state_dict `_ to load DTensor state dicts into the model. .. code-block:: python - if __name__ == '__main__': - # Training settings - parser = argparse.ArgumentParser(description='PyTorch MNIST Example') - parser.add_argument('--batch-size', type=int, default=64, metavar='N', - help='input batch size for training (default: 64)') - parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N', - help='input batch size for testing (default: 1000)') - parser.add_argument('--epochs', type=int, default=10, metavar='N', - help='number of epochs to train (default: 14)') - parser.add_argument('--lr', type=float, default=1.0, metavar='LR', - help='learning rate (default: 1.0)') - parser.add_argument('--gamma', type=float, default=0.7, metavar='M', - help='Learning rate step gamma (default: 0.7)') - parser.add_argument('--no-cuda', action='store_true', default=False, - help='disables CUDA training') - parser.add_argument('--seed', type=int, default=1, metavar='S', - help='random seed (default: 1)') - parser.add_argument('--save-model', action='store_true', default=False, - help='For Saving the current Model') - args = parser.parse_args() - - torch.manual_seed(args.seed) - - WORLD_SIZE = torch.cuda.device_count() - mp.spawn(fsdp_main, - args=(WORLD_SIZE, args), - nprocs=WORLD_SIZE, - join=True) - - -We have recorded cuda events to measure the time of FSDP model specifics. The CUDA event time was 110.85 seconds. - -.. 
code-block:: bash - - python FSDP_mnist.py - - CUDA event elapsed time on training loop 40.67462890625sec - -Wrapping the model with FSDP, the model will look as follows, we can see the model has been wrapped in one FSDP unit. -Alternatively, we will look at adding the auto_wrap_policy next and will discuss the differences. - -.. code-block:: bash - - FullyShardedDataParallel( - (_fsdp_wrapped_module): FlattenParamsWrapper( - (_fpw_module): Net( - (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1)) - (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1)) - (dropout1): Dropout(p=0.25, inplace=False) - (dropout2): Dropout(p=0.5, inplace=False) - (fc1): Linear(in_features=9216, out_features=128, bias=True) - (fc2): Linear(in_features=128, out_features=10, bias=True) - ) + from torch.distributed.tensor import distribute_tensor + + # mmap=True reduces CPU memory usage + full_sd = torch.load( + "checkpoints/model_state_dict.pt", + mmap=True, + weights_only=True, + map_location='cpu', ) - ) + meta_sharded_sd = model.state_dict() + sharded_sd = {} + for param_name, full_tensor in full_sd.items(): + sharded_meta_param = meta_sharded_sd.get(param_name) + sharded_tensor = distribute_tensor( + full_tensor, + sharded_meta_param.device_mesh, + sharded_meta_param.placements, + ) + sharded_sd[param_name] = nn.Parameter(sharded_tensor) + # `assign=True` since we cannot call `copy_` on meta tensor + model.load_state_dict(sharded_sd, assign=True) -The following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. +**Saving state dicts**: ``model.state_dict()`` returns a DTensor state dict. We can convert a DTensor into a plain ``torch.Tensor`` by calling `full_tensor() `_. Internally it issues an all-gather across ranks to get unsharded parameters in plain torch.Tensor. For rank 0, ``full_param.cpu()`` offloads the tensor to cpu one by one to avoid peaking GPU memory with unsharded parameters. +.. code-block:: python + + sharded_sd = model.state_dict() + cpu_state_dict = {} + for param_name, sharded_param in sharded_sd.items(): + full_param = sharded_param.full_tensor() + if torch.distributed.get_rank() == 0: + cpu_state_dict[param_name] = full_param.cpu() + else: + del full_param + torch.save(cpu_state_dict, "checkpoints/model_state_dict.pt") -.. figure:: /_static/img/distributed/FSDP_memory.gif - :width: 100% - :align: center - :alt: FSDP peak memory - FSDP Peak Memory Usage +Optimizer state dict works similarly (`code `_). Users can customize the above DTensor scripts to work with 3rd party checkpoints. -Applying *auto_wrap_policy* in FSDP otherwise, FSDP will put the entire model in one FSDP unit, which will reduce computation efficiency and memory efficiency. -The way it works is that, suppose your model contains 100 Linear layers. If you do FSDP(model), there will only be one FSDP unit which wraps the entire model. -In that case, the allgather would collect the full parameters for all 100 linear layers, and hence won't save CUDA memory for parameter sharding. -Also, there is only one blocking allgather call for the all 100 linear layers, there will not be communication and computation overlapping between layers. +If there is no need for customization, we can use `DCP APIs `_ directly to support both single-node and multi-node training. -To avoid that, you can pass in an auto_wrap_policy, which will seal the current FSDP unit and start a new one automatically when the specified condition is met (e.g., size limit). 
-In that way you will have multiple FSDP units, and only one FSDP unit needs to collect full parameters at a time. E.g., suppose you have 5 FSDP units, and each wraps 20 linear layers. -Then, in the forward, the 1st FSDP unit will allgather parameters for the first 20 linear layers, do computation, discard the parameters and then move on to the next 20 linear layers. So, at any point in time, each rank only materializes parameters/grads for 20 linear layers instead of 100. +State Dict with DCP APIs +~~~~~~~~~~~~~~~~~~~~~~~~ -To do so in 2.4 we define the auto_wrap_policy and pass it to FSDP wrapper, in the following example, my_auto_wrap_policy defines that a layer could be wrapped or sharded by FSDP if the number of parameters in this layer is larger than 100. -If the number of parameters in this layer is smaller than 100, it will be wrapped with other small layers together by FSDP. -Finding an optimal auto wrap policy is challenging, PyTorch will add auto tuning for this config in the future. Without an auto tuning tool, it is good to profile your workflow using different auto wrap policies experimentally and find the optimal one. +**command**: ``torchrun --nproc_per_node 2 train.py --dcp-api`` + +* For the 1st time, it creates checkpoints for the model and optimizer +* For the 2nd time, it loads from the previous checkpoint to resume training + +**Loading state dicts**: We can load a full state dict into a FSDP2 model with `set_model_state_dict `_. With ``broadcast_from_rank0=True``, we can load the full state dict only on rank 0 to avoid peaking CPU memory. DCP will shard tensors and broadcast them to other ranks. .. code-block:: python - my_auto_wrap_policy = functools.partial( - size_based_auto_wrap_policy, min_num_params=20000 - ) - torch.cuda.set_device(rank) - model = Net().to(rank) - - model = FSDP(model, - auto_wrap_policy=my_auto_wrap_policy) - -Applying the auto_wrap_policy, the model would be as follows: - -.. code-block:: bash - - FullyShardedDataParallel( - (_fsdp_wrapped_module): FlattenParamsWrapper( - (_fpw_module): Net( - (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1)) - (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1)) - (dropout1): Dropout(p=0.25, inplace=False) - (dropout2): Dropout(p=0.5, inplace=False) - (fc1): FullyShardedDataParallel( - (_fsdp_wrapped_module): FlattenParamsWrapper( - (_fpw_module): Linear(in_features=9216, out_features=128, bias=True) + from torch.distributed.checkpoint.state_dict import set_model_state_dict + set_model_state_dict( + model=model, + model_state_dict=full_sd, + options=StateDictOptions( + full_state_dict=True, + broadcast_from_rank0=True, + ), + ) + +**Saving state dicts**: `get_model_state_dict `_ with ``full_state_dict=True`` and ``cpu_offload=True`` all-gathers tensors and offload them to CPU. It works similarly to DTensor APIs. + +.. code-block:: python + + from torch.distributed.checkpoint.state_dict import get_model_state_dict + model_state_dict = get_model_state_dict( + model=model, + options=StateDictOptions( + full_state_dict=True, + cpu_offload=True, ) - ) - (fc2): Linear(in_features=128, out_features=10, bias=True) ) - ) + torch.save(model_state_dict, "model_state_dict.pt") -.. code-block:: bash +Refer to `pytorch/examples `__ for loading and saving optimizer state dicts with `set_optimizer_state_dict `_ and `get_optimizer_state_dict `_. 
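+For reference, below is a minimal sketch of saving and loading an optimizer state dict with these two APIs. It assumes the ``model`` and ``optim`` objects from the earlier sections, and the checkpoint path is only an example.
+
+.. code-block:: python
+
+    from torch.distributed.checkpoint.state_dict import (
+        StateDictOptions,
+        get_optimizer_state_dict,
+        set_optimizer_state_dict,
+    )
+
+    # saving: all-gather the optimizer state, offload it to CPU, and save on rank 0
+    optim_sd = get_optimizer_state_dict(
+        model=model,
+        optimizers=optim,
+        options=StateDictOptions(full_state_dict=True, cpu_offload=True),
+    )
+    if torch.distributed.get_rank() == 0:
+        torch.save(optim_sd, "checkpoints/optim_state_dict.pt")
+
+    # loading: read the full optimizer state dict (with broadcast_from_rank0=True only
+    # rank 0 strictly needs the full copy) and let DCP shard and broadcast it
+    full_osd = torch.load(
+        "checkpoints/optim_state_dict.pt",
+        mmap=True,
+        weights_only=True,
+        map_location="cpu",
+    )
+    set_optimizer_state_dict(
+        model=model,
+        optimizers=optim,
+        optim_state_dict=full_osd,
+        options=StateDictOptions(full_state_dict=True, broadcast_from_rank0=True),
+    )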
- python FSDP_mnist.py - CUDA event elapsed time on training loop 41.89130859375sec +FSDP1-to-FSDP2 migration guide +--------------- -The following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on a g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. -It can be observed that the peak memory usage on each device is smaller compared to FSDP without auto wrap policy applied, from ~75 MB to 66 MB. +Let’s look at an example of an `FSDP `_ usage and an equivalent `fully_shard `_ usage. We’ll highlight the key differences and suggest steps for migration. -.. figure:: /_static/img/distributed/FSDP_autowrap.gif - :width: 100% - :align: center - :alt: FSDP peak memory +Original FSDP() usage - FSDP Peak Memory Usage using Auto_wrap policy +.. code-block:: python -*CPU Off-loading*: In case the model is very large that even with FSDP wouldn't fit into GPUs, then CPU offload can be helpful here. + from torch.distributed.fsdp import FullyShardedDataParallel as FSDP + with torch.device("meta"): + model = Transformer() + policy = ModuleWrapPolicy({TransformerBlock}) + model = FSDP(model, auto_wrap_policy=policy) + def param_init_fn(module: nn.Module) -> None: ... + model = FSDP(model, auto_wrap_policy=policy, param_init_fn=param_init_fn) -Currently, only parameter and gradient CPU offload is supported. It can be enabled via passing in cpu_offload=CPUOffload(offload_params=True). +New fully_shard() usage -Note that this currently implicitly enables gradient offloading to CPU in order for params and grads to be on the same device to work with the optimizer. This API is subject to change. The default is None in which case there will be no offloading. +.. code-block:: python -Using this feature may slow down the training considerably, due to frequent copying of tensors from host to device, but it could help improve memory efficiency and train larger scale models. + with torch.device("meta"): + model = Transformer() + for module in model.modules(): + if isinstance(module, TransformerBlock): + fully_shard(module) + fully_shard(model) + for tensor in itertools.chain(model.parameters(), model.buffers()): + assert tensor.device == torch.device("meta") -In 2.4 we just add it to the FSDP wrapper + # Initialize the model after sharding + model.to_empty(device="cuda") + model.reset_parameters() -.. code-block:: python +Migration Steps - model = FSDP(model, - auto_wrap_policy=my_auto_wrap_policy, - cpu_offload=CPUOffload(offload_params=True)) +* Replace the imports +* Implement your ‘policy’ directly (apply ``fully_shard`` to the desired sublayers) +* Wrap your root model with ``fully_shard`` instead of ``FSDP`` +* Get rid of ``param_init_fn`` and manually call ``model.reset_parameters()`` +* Replace other FSDP1 kwargs (see below) -Compare it with DDP, if in 2.4 we just normally wrap the model in DPP, saving the changes in “DDP_mnist.py”. +sharding_strategy -.. code-block:: python +* FULL_SHARD: ``reshard_after_forward=True`` +* SHARD_GRAD_OP: ``reshard_after_forward=False`` +* HYBRID_SHARD: ``reshard_after_forward=True`` with a 2D device mesh +* _HYBRID_SHARD_ZERO2: ``reshard_after_forward=False`` with a 2D device mesh - model = Net().to(rank) - model = DDP(model) +cpu_offload +* CPUOffload.offload_params=False: ``offload_policy=None`` +* CPUOffload.offload_params = True: ``offload_policy=CPUOffloadPolicy()`` -.. 
code-block:: bash +backward_prefetch - python DDP_mnist.py +* BACKWARD_PRE: always used +* BACKWARD_POST: not supported - CUDA event elapsed time on training loop 39.77766015625sec +mixed_precision -The following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch profiler. +* ``buffer_dtype`` is omitted because fully_shard does not shard buffers +* fully_shard’s ``cast_forward_inputs`` maps to both ``cast_forward_inputs`` and ``cast_root_forward_inputs`` in FSDP1 +* ``output_dtype`` is a new config for fully_shard -.. figure:: /_static/img/distributed/DDP_memory.gif - :width: 100% - :align: center - :alt: FSDP peak memory +device_id: Inferred from device_mesh’s device + +sync_module_states=True/False: Moved to DCP. User can broadcast state dicts from rank0 using `set_model_state_dict `_ with ``broadcast_from_rank0=True`` + +forward_prefetch: Manual control over prefetching is possible with + +* Manually call ``fsdp_module.unshard()`` +* Use these APIs to control automatic prefetching, ``set_modules_to_forward_prefetch`` and ``set_modules_to_backward_prefetch`` - DDP Peak Memory Usage using Auto_wrap policy +limit_all_gathers: No longer needed, because ``fully_shard`` removed cpu synchronization +use_orig_params: Original params are always used (no more flat parameter) -Considering the toy example and tiny MNIST model we defined here, we can observe the difference between peak memory usage of DDP and FSDP. -In DDP each process holds a replica of the model, so the memory footprint is higher compared to FSDP which shards the model parameters, optimizer states and gradients over DDP ranks. -The peak memory usage using FSDP with auto_wrap policy is the lowest followed by FSDP and DDP. +no_sync(): `set_requires_gradient_sync `_ -Also, looking at timings, considering the small model and running the training on a single machine, FSDP with and without auto_wrap policy performed almost as fast as DDP. -This example does not represent most of the real applications, for detailed analysis and comparison between DDP and FSDP please refer to this `blog post `__ . +ignored_params and ignored_states: ignored_params From 87b76bd4e3e3eccebd36fd94c2e8508522d083c3 Mon Sep 17 00:00:00 2001 From: "Wei (Will) Feng" Date: Tue, 13 May 2025 11:01:57 -0700 Subject: [PATCH 2/2] FSDP1 deprecation msg --- index.rst | 6 +-- intermediate_source/FSDP1_tutorial.rst | 52 +++++++++++++------------- intermediate_source/FSDP_tutorial.rst | 12 +++--- 3 files changed, 35 insertions(+), 35 deletions(-) diff --git a/index.rst b/index.rst index e4ab3c1d81e..da64acca181 100644 --- a/index.rst +++ b/index.rst @@ -766,14 +766,14 @@ Welcome to PyTorch Tutorials :tags: Parallel-and-Distributed-Training .. customcarditem:: - :header: Getting Started with Fully Sharded Data Parallel(FSDP) - :card_description: Learn how to train models with Fully Sharded Data Parallel package. + :header: Getting Started with Fully Sharded Data Parallel (FSDP2) + :card_description: Learn how to train models with Fully Sharded Data Parallel (fully_shard) package. :image: _static/img/thumbnails/cropped/Getting-Started-with-FSDP.png :link: intermediate/FSDP_tutorial.html :tags: Parallel-and-Distributed-Training .. customcarditem:: - :header: Advanced Model Training with Fully Sharded Data Parallel (FSDP) + :header: Advanced Model Training with Fully Sharded Data Parallel (FSDP1) :card_description: Explore advanced model training with Fully Sharded Data Parallel package. 
:image: _static/img/thumbnails/cropped/Getting-Started-with-FSDP.png :link: intermediate/FSDP_advanced_tutorial.html diff --git a/intermediate_source/FSDP1_tutorial.rst b/intermediate_source/FSDP1_tutorial.rst index 8e5217c64a8..0c59db689a8 100644 --- a/intermediate_source/FSDP1_tutorial.rst +++ b/intermediate_source/FSDP1_tutorial.rst @@ -4,19 +4,19 @@ Getting Started with Fully Sharded Data Parallel(FSDP) **Author**: `Hamid Shojanazeri `__, `Yanli Zhao `__, `Shen Li `__ .. note:: - |edit| View and edit this tutorial in `github `__. + |edit| FSDP1 is deprecated. Please check out `FSDP2 tutorial `_. -Training AI models at a large scale is a challenging task that requires a lot of compute power and resources. +Training AI models at a large scale is a challenging task that requires a lot of compute power and resources. It also comes with considerable engineering complexity to handle the training of these very large models. `PyTorch FSDP `__, released in PyTorch 1.11 makes this easier. -In this tutorial, we show how to use `FSDP APIs `__, for simple MNIST models that can be extended to other larger models such as `HuggingFace BERT models `__, -`GPT 3 models up to 1T parameters `__ . The sample DDP MNIST code courtesy of `Patrick Hu `_. +In this tutorial, we show how to use `FSDP APIs `__, for simple MNIST models that can be extended to other larger models such as `HuggingFace BERT models `__, +`GPT 3 models up to 1T parameters `__ . The sample DDP MNIST code courtesy of `Patrick Hu `_. How FSDP works -------------- -In `DistributedDataParallel `__, (DDP) training, each process/ worker owns a replica of the model and processes a batch of data, finally it uses all-reduce to sum up gradients over different workers. In DDP the model weights and optimizer states are replicated across all workers. FSDP is a type of data parallelism that shards model parameters, optimizer states and gradients across DDP ranks. +In `DistributedDataParallel `__, (DDP) training, each process/ worker owns a replica of the model and processes a batch of data, finally it uses all-reduce to sum up gradients over different workers. In DDP the model weights and optimizer states are replicated across all workers. FSDP is a type of data parallelism that shards model parameters, optimizer states and gradients across DDP ranks. When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes the training of some very large models feasible by allowing larger models or batch sizes to fit on device. This comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like overlapping communication and computation. @@ -44,7 +44,7 @@ At a high level FSDP works as follow: * Run all_gather to collect all shards from all ranks to recover the full parameter in this FSDP unit * Run backward computation * Run reduce_scatter to sync gradients -* Discard parameters. +* Discard parameters. One way to view FSDP's sharding is to decompose the DDP gradient all-reduce into reduce-scatter and all-gather. Specifically, during the backward pass, FSDP reduces and scatters gradients, ensuring that each rank possesses a shard of the gradients. Then it updates the corresponding shard of the parameters in the optimizer step. Finally, in the subsequent forward pass, it performs an all-gather operation to collect and combine the updated parameter shards. 
@@ -57,7 +57,7 @@ One way to view FSDP's sharding is to decompose the DDP gradient all-reduce into How to use FSDP --------------- -Here we use a toy model to run training on the MNIST dataset for demonstration purposes. The APIs and logic can be applied to training larger models as well. +Here we use a toy model to run training on the MNIST dataset for demonstration purposes. The APIs and logic can be applied to training larger models as well. *Setup* @@ -116,7 +116,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”. def cleanup(): dist.destroy_process_group() -2.1 Define our toy model for handwritten digit classification. +2.1 Define our toy model for handwritten digit classification. .. code-block:: python @@ -131,7 +131,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”. self.fc2 = nn.Linear(128, 10) def forward(self, x): - + x = self.conv1(x) x = F.relu(x) x = self.conv2(x) @@ -146,7 +146,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”. output = F.log_softmax(x, dim=1) return output -2.2 Define a train function +2.2 Define a train function .. code-block:: python @@ -169,7 +169,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”. if rank == 0: print('Train Epoch: {} \tLoss: {:.6f}'.format(epoch, ddp_loss[0] / ddp_loss[1])) -2.3 Define a validation function +2.3 Define a validation function .. code-block:: python @@ -230,8 +230,8 @@ We add the following code snippets to a python script “FSDP_mnist.py”. size_based_auto_wrap_policy, min_num_params=100 ) torch.cuda.set_device(rank) - - + + init_start_event = torch.cuda.Event(enable_timing=True) init_end_event = torch.cuda.Event(enable_timing=True) @@ -261,7 +261,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”. states = model.state_dict() if rank == 0: torch.save(states, "mnist_cnn.pt") - + cleanup() @@ -309,7 +309,7 @@ We have recorded cuda events to measure the time of FSDP model specifics. The CU CUDA event elapsed time on training loop 40.67462890625sec Wrapping the model with FSDP, the model will look as follows, we can see the model has been wrapped in one FSDP unit. -Alternatively, we will look at adding the auto_wrap_policy next and will discuss the differences. +Alternatively, we will look at adding the auto_wrap_policy next and will discuss the differences. .. code-block:: bash @@ -326,7 +326,7 @@ Alternatively, we will look at adding the auto_wrap_policy next and will discuss ) ) -The following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. +The following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. .. figure:: /_static/img/distributed/FSDP_memory.gif @@ -336,10 +336,10 @@ The following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarg FSDP Peak Memory Usage -Applying *auto_wrap_policy* in FSDP otherwise, FSDP will put the entire model in one FSDP unit, which will reduce computation efficiency and memory efficiency. -The way it works is that, suppose your model contains 100 Linear layers. If you do FSDP(model), there will only be one FSDP unit which wraps the entire model. +Applying *auto_wrap_policy* in FSDP otherwise, FSDP will put the entire model in one FSDP unit, which will reduce computation efficiency and memory efficiency. +The way it works is that, suppose your model contains 100 Linear layers. 
If you do FSDP(model), there will only be one FSDP unit which wraps the entire model. In that case, the allgather would collect the full parameters for all 100 linear layers, and hence won't save CUDA memory for parameter sharding. -Also, there is only one blocking allgather call for the all 100 linear layers, there will not be communication and computation overlapping between layers. +Also, there is only one blocking allgather call for the all 100 linear layers, there will not be communication and computation overlapping between layers. To avoid that, you can pass in an auto_wrap_policy, which will seal the current FSDP unit and start a new one automatically when the specified condition is met (e.g., size limit). In that way you will have multiple FSDP units, and only one FSDP unit needs to collect full parameters at a time. E.g., suppose you have 5 FSDP units, and each wraps 20 linear layers. @@ -347,7 +347,7 @@ Then, in the forward, the 1st FSDP unit will allgather parameters for the first To do so in 2.4 we define the auto_wrap_policy and pass it to FSDP wrapper, in the following example, my_auto_wrap_policy defines that a layer could be wrapped or sharded by FSDP if the number of parameters in this layer is larger than 100. -If the number of parameters in this layer is smaller than 100, it will be wrapped with other small layers together by FSDP. +If the number of parameters in this layer is smaller than 100, it will be wrapped with other small layers together by FSDP. Finding an optimal auto wrap policy is challenging, PyTorch will add auto tuning for this config in the future. Without an auto tuning tool, it is good to profile your workflow using different auto wrap policies experimentally and find the optimal one. .. code-block:: python @@ -388,7 +388,7 @@ Applying the auto_wrap_policy, the model would be as follows: CUDA event elapsed time on training loop 41.89130859375sec -The following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on a g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. +The following is the peak memory usage from FSDP with auto_wrap policy of MNIST training on a g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch Profiler. It can be observed that the peak memory usage on each device is smaller compared to FSDP without auto wrap policy applied, from ~75 MB to 66 MB. .. figure:: /_static/img/distributed/FSDP_autowrap.gif @@ -398,13 +398,13 @@ It can be observed that the peak memory usage on each device is smaller compared FSDP Peak Memory Usage using Auto_wrap policy -*CPU Off-loading*: In case the model is very large that even with FSDP wouldn't fit into GPUs, then CPU offload can be helpful here. +*CPU Off-loading*: In case the model is very large that even with FSDP wouldn't fit into GPUs, then CPU offload can be helpful here. Currently, only parameter and gradient CPU offload is supported. It can be enabled via passing in cpu_offload=CPUOffload(offload_params=True). Note that this currently implicitly enables gradient offloading to CPU in order for params and grads to be on the same device to work with the optimizer. This API is subject to change. The default is None in which case there will be no offloading. -Using this feature may slow down the training considerably, due to frequent copying of tensors from host to device, but it could help improve memory efficiency and train larger scale models. 
+Using this feature may slow down the training considerably, due to frequent copying of tensors from host to device, but it could help improve memory efficiency and train larger scale models. In 2.4 we just add it to the FSDP wrapper @@ -430,7 +430,7 @@ Compare it with DDP, if in 2.4 we just normally wrap the model in DPP, saving th CUDA event elapsed time on training loop 39.77766015625sec -The following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch profiler. +The following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge AWS EC2 instance with 4 GPUs captured from PyTorch profiler. .. figure:: /_static/img/distributed/DDP_memory.gif :width: 100% @@ -440,9 +440,9 @@ The following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge DDP Peak Memory Usage using Auto_wrap policy -Considering the toy example and tiny MNIST model we defined here, we can observe the difference between peak memory usage of DDP and FSDP. +Considering the toy example and tiny MNIST model we defined here, we can observe the difference between peak memory usage of DDP and FSDP. In DDP each process holds a replica of the model, so the memory footprint is higher compared to FSDP which shards the model parameters, optimizer states and gradients over DDP ranks. -The peak memory usage using FSDP with auto_wrap policy is the lowest followed by FSDP and DDP. +The peak memory usage using FSDP with auto_wrap policy is the lowest followed by FSDP and DDP. Also, looking at timings, considering the small model and running the training on a single machine, FSDP with and without auto_wrap policy performed almost as fast as DDP. This example does not represent most of the real applications, for detailed analysis and comparison between DDP and FSDP please refer to this `blog post `__ . diff --git a/intermediate_source/FSDP_tutorial.rst b/intermediate_source/FSDP_tutorial.rst index 98258716432..f8ee1a7a3de 100644 --- a/intermediate_source/FSDP_tutorial.rst +++ b/intermediate_source/FSDP_tutorial.rst @@ -1,10 +1,10 @@ -Getting Started with Fully Sharded Data Parallel(FSDP) +Getting Started with Fully Sharded Data Parallel (FSDP2) ====================================================== **Author**: `Wei Feng `__, `Will Constable `__, `Yifan Mao `__ .. note:: - |edit| Check out the code in this tutorial from `pytorch/examples `__. + |edit| Check out the code in this tutorial from `pytorch/examples `_. FSDP1 will be deprecated. The old tutorial can be found `here `_. How FSDP2 works -------------- @@ -166,7 +166,7 @@ Explicit prefetching works well in following situation: Enabling Mixed Precision ~~~~~~~~~~~~~~~ -FSDP2 offers a flexible `mixed precision policy `_ to speed up training. One typical use case are +FSDP2 offers a flexible `mixed precision policy `_ to speed up training. One typical use case is * Casting float32 parameters to bfloat16 for forward/backward computation, see ``param_dtype=torch.bfloat16`` * Upcasting gradients to float32 for reduce-scatter to preserve accuracy, see ``reduce_dtype=torch.float32`` @@ -399,8 +399,8 @@ sync_module_states=True/False: Moved to DCP. 
User can broadcast state dicts from forward_prefetch: Manual control over prefetching is possible with -* Manually call ``fsdp_module.unshard()`` -* Use these APIs to control automatic prefetching, ``set_modules_to_forward_prefetch`` and ``set_modules_to_backward_prefetch`` +* Manually call `fsdp_module.unshard() `_ +* Use these APIs to control automatic prefetching, `set_modules_to_forward_prefetch `_ and `set_modules_to_backward_prefetch `_ limit_all_gathers: No longer needed, because ``fully_shard`` removed cpu synchronization @@ -408,4 +408,4 @@ use_orig_params: Original params are always used (no more flat parameter) no_sync(): `set_requires_gradient_sync `_ -ignored_params and ignored_states: ignored_params +ignored_params and ignored_states: `ignored_params `_