Wednesday, March 05, 2014

Stochastic Gradient Methods 2014

Last week I attended the Stochastic Gradient Methods workshop held at UCLA's IPAM. Surprisingly, there's still quite a bit of activity, and a number of unsolved questions, around what is essentially minimizing a quadratic function.

In 2009, Strohmer and Vershynin rediscovered the Kaczmarz method, an algorithm for solving linear systems of equations dating back to 1937, and showed that this algorithm is a form of Stochastic Gradient. This view motivates a biased sampling strategy which gives a faster convergence rate than regular Stochastic Gradient, and it spurred a flurry of activity, motivating results in at least 5 different lectures.
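
To make the connection concrete, here is a minimal numpy sketch (my own, not from any talk) of randomized Kaczmarz with row-norm-biased sampling on a consistent system $Ax=b$; each projection step is exactly an SGD step on the component loss $f_i(x) = (a_i^\top x - b_i)^2 / (2\|a_i\|^2)$.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.

    Rows are sampled with probability proportional to ||a_i||^2 (the
    biased sampling of Strohmer-Vershynin); each step projects the
    iterate onto the hyperplane a_i @ x = b_i, which is exactly an SGD
    step on f_i(x) = (a_i @ x - b_i)^2 / (2 ||a_i||^2)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum('ij,ij->i', A, A)   # ||a_i||^2 for every row
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Consistent random system: iterates converge to the exact solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))
```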

In 2010, Nesterov showed that Randomized Coordinate Descent has a faster convergence rate than SGD, and in 2013 Singer showed a way to accelerate it to $O(1/k^2)$ convergence. In 2013, Richtarik gave an alternative algorithm achieving the same convergence rate, and also came up with better step sizes that rely on the sparsity pattern of the problem.

Summaries of talks I attended with links to slides are below:

Ben Recht

Gave an overview of the Hogwild! and Jellyfish methods. Hogwild! has been covered a few times before at NIPS, but here's an overview slide



Jellyfish (described in their Large-Scale Matrix Completion paper) chooses the sampling order in a way that minimizes lock contention.

Also talked about their work on explaining the gap in performance between SGD sampling with replacement and without replacement. Empirically, sampling without replacement works better (see Section 5 of the "Beneath the valley" paper), yet until recently the tools to explain this were missing. They are able to prove faster rates for no-replacement sampling in the Kaczmarz algorithm by relying on the noncommutative arithmetic-geometric mean inequality.
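
Here is a toy experiment (mine, same consistent least-squares setup as above) contrasting the two sampling schemes over Kaczmarz sweeps; in runs like this the shuffled, without-replacement variant typically ends up with smaller error after the same number of epochs.

```python
import numpy as np

def kaczmarz_epochs(A, b, epochs, order_fn, rng):
    """Run Kaczmarz sweeps; order_fn picks the row order for each epoch."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(epochs):
        for i in order_fn(m, rng):
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = rng.standard_normal(30)
b = A @ x_true

with_repl = lambda m, r: r.integers(0, m, size=m)  # i.i.d. row draws
without_repl = lambda m, r: r.permutation(m)       # fresh shuffle per epoch

for name, order in [("with replacement", with_repl),
                    ("without replacement", without_repl)]:
    err = np.linalg.norm(kaczmarz_epochs(A, b, 20, order, rng) - x_true)
    print(f"{name}: {err:.2e}")
```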

Resources:


  • Slides Recht - we should all run hogwild!.pdf
  • Beneath the valley of the noncommutative arithmetic-geometric mean inequality: conjectures, case-studies, and consequences. http://arxiv.org/abs/1202.4184
  • Parallel Stochastic Gradient Algorithms for Large-Scale Matrix Completion. Recht and Re. 2011.
  • HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. Niu, Recht, Re, and Wright. 2011.

Yoram Singer

Talked about accelerating coordinate descent with a momentum-like approach, dubbed Generalized Accelerated Gradient Descent. Nesterov's accelerated gradient method has $O(1/k^2)$ convergence with linear dependence on the Lipschitz constant $L$ of the loss:
$$O\left(\frac{L}{k^2}\right)$$

Parallel coordinate descent depends on the average of the per-coordinate Lipschitz constants, which can be much better for a badly conditioned loss:
$$O\left(\frac{\bar{L_i}}{k}\right)$$

The proposed method gets the $O(1/k^2)$ convergence of accelerated gradient while retaining the dependence on the average curvature rather than the worst case:
$$O\left(\frac{\bar{L_i}}{k^2}\right)$$

Resources:



Dimitri Bertsekas

In-depth tutorial "Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Unified Framework"

One slide that stuck out is the one-dimensional illustration of why SGD works.


In the far-out region, all gradients point in the same direction, so taking a gradient step with respect to a single component function works just as well as looking at the full sum.

This also serves as the motivation for the "heavy ball" method (Polyak, 1964). When you are in the far-out region you want to accelerate, while in the region of confusion you want to decelerate. You can accomplish this by modifying the gradient update formula as follows:

$$x_{k+1} = x_k-\alpha_k \nabla f_{i_k}(x_k)+\beta_k(x_k-x_{k-1})$$
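
Here is a minimal numpy sketch of this update on a toy least-squares problem (mine; constant gains $\alpha_k = \alpha$, $\beta_k = \beta$ for simplicity):

```python
import numpy as np

def sgd_heavy_ball(grad_i, x0, n_components, alpha=0.005, beta=0.9,
                   iters=20000, seed=0):
    """SGD with Polyak's heavy-ball term: the beta * (x_k - x_{k-1})
    memory keeps accelerating while consecutive stochastic gradients
    agree (far-out region) and damps steps once they disagree
    (region of confusion)."""
    rng = np.random.default_rng(seed)
    x_prev = x = np.array(x0, dtype=float)
    for _ in range(iters):
        i = rng.integers(n_components)            # pick a component f_i
        x, x_prev = x - alpha * grad_i(x, i) + beta * (x - x_prev), x
    return x

# Toy problem: f_i(x) = (a_i @ x - b_i)^2 / 2
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
grad = lambda x, i: (A[i] @ x - b[i]) * A[i]
print(np.linalg.norm(sgd_heavy_ball(grad, np.zeros(10), 500) - x_true))
```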

The heavy-ball update is similar in spirit to the "Accelerated Stochastic Approximation" of Kesten (1958), which grows the step size if the differences between successive $x$'s have the same sign, and shrinks it if there are many sign changes.

Schmidt said Stochastic Average Gradient works better than Kesten's approach in a multi-dimensional setting.

Resources:



Peter Richtarik


Gave an overview of his "Accelerated, Parallel and Proximal Coordinate Descent". It gives a technical improvement over his previous work, "Distributed coordinate descent method for learning with big data" (http://arxiv.org/abs/1310.2059), which seems to have the meat of the contributions. (But see Peter's correction in the comments below: the two papers are complementary, with essentially no technical intersection.)

Here's a slide from his talk comparing various methods.


"Prox" column means the algorithm can take proximal steps, i.e., can be used with constraints and not-nice regularizers. "Accel" or Accelerated is whether the method is enjoys $O(1/k^2)$ convergence rate where $k$ is the iteration counter. "General f" means it applies for convex problems rather than quadratic. "Block" is whether method can update some of the coordinates at a time rather than all coordinates.

The setting of the problem is summarized in the slide below.


You are optimizing a sum of losses $f_e$, and not all losses depend on all coordinates. You want to update your sets of coordinates in blocks, in parallel. The sets of variables involved in each $f_e$ determine how well you can parallelize the problem. In half a dozen papers on his website he develops a framework dubbed Expected Separable Overapproximation (ESO) to analyze such problems.

One outcome of the ESO approach is a formula that incorporates the sparsity of the problem into the calculation of the step size. See Table 3 of his APPROX paper (http://arxiv.org/pdf/1312.5799v1.pdf):

$$v_i = \sum_{j=1}^m \left(1+\frac{(\omega_j-1)(\tau-1)}{\max(1,n-1)}\right)A_{ji}^2$$

This is the formula for the step size for coordinate $i$ in randomized coordinate descent, computed as a sum over examples $j$. The quantity $\omega_j$ is the number of coordinates that example $x_j$ depends on, $n$ is the dimensionality, and $\tau$ is the number of coordinates updated in parallel. $A$ is the matrix of the quadratic minimization problem, replaced with the matrix of per-coordinate Lipschitz constants for general convex problems.
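
Here's a sketch of how one might compute these $v_i$ from a sparse data matrix (my reading of the formula; the function name and conventions are mine). Sparser rows -- small $\omega_j$ -- yield smaller $v_i$ and hence larger coordinate steps, since the step for coordinate $i$ scales like $1/v_i$.

```python
import numpy as np
from scipy import sparse

def eso_stepsize_params(A, tau):
    """Per-coordinate ESO constants:
        v_i = sum_j (1 + (omega_j - 1)(tau - 1) / max(1, n - 1)) * A_ji^2
    where omega_j is the number of nonzero coordinates in row j and
    tau is the number of coordinates updated in parallel."""
    A = sparse.csr_matrix(A)
    m, n = A.shape
    omega = np.diff(A.indptr)                    # nonzeros per row j
    scale = 1.0 + (omega - 1.0) * (tau - 1.0) / max(1, n - 1)
    A2 = A.multiply(A)                           # elementwise A_ji^2
    return np.asarray(A2.T @ scale).ravel()      # sum over rows j

A = sparse.random(1000, 50, density=0.05, random_state=0)
print(eso_stepsize_params(A, tau=8)[:5])
```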

Resources:



Rachel Ward and Deanna Needell


Gave background on their paper "Stochastic Gradient Descent and the Randomized Kaczmarz Algorithm". The setting:



Further in the presentation, they developed importance sampling for SGD. Traditionally, SGD picks a random component of the sum above, and the number of steps required to reach a given accuracy is proportional to the worst condition number (Lipschitz constant) over the per-example losses.

They derived the following formula for the number of steps needed to reach a given accuracy $\epsilon$ with uniform sampling:

$$k \propto \log(1/\epsilon) \left(\sup_i \frac{L_i}{\mu} + \epsilon^{-1} \frac{\sigma^2}{\mu^2}\right)$$

For quadratics, the first term is close to the largest condition number out of all component functions $f_i$, except you are normalizing by the global smallest eigenvalue $\mu$, rather than the per-component smallest eigenvalue $\mu_i$. The second term is "normalized consistency" -- the expected squared norm of the gradient, divided by the smallest eigenvalue squared.

Instead of uniform sampling, we can sample examples in linear proportion to the Lipschitz constant of the gradient of the loss on that example. This cuts the number of steps down to the average Lipschitz constant normalized by the strong convexity parameter $\mu$, rather than the largest Lipschitz constant. Since the Lipschitz constant is an upper bound on the largest eigenvalue of the Hessian, this means the number of steps grows in proportion to the average condition number rather than the largest condition number.

The term involving Lipschitz constant now drops from max to average. In other words we get this:
$$\sup_i \frac{L_i}{\mu} \to \frac{\bar{L}}{\mu}$$

The second (consistency) term can instead potentially get larger; we get
$$\frac{\sigma^2}{\mu^2}\to \frac{\bar{L}\sigma^2}{\inf_i L_i \mu^2}$$

The best trade-off depends on the details of the function: badly conditioned but accurate gradients -- sample proportionally to the Lipschitz constants; well conditioned but noisy gradients -- stay closer to uniform. She showed that if we sample halfway between uniform and Lipschitz-proportional, so-called "partially biased sampling", both terms are guaranteed to be smaller than for uniform sampling.
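
A sketch of partially biased sampling for least squares (my own illustration; it assumes the usual reweighting of the sampled gradient by $1/(m p_i)$, with $m$ examples, so the update stays unbiased):

```python
import numpy as np

def sgd_weighted(A, b, lam=0.5, iters=50000, seed=0):
    """SGD on f(x) = (1/2m) sum_i (a_i @ x - b_i)^2 where example i is
    drawn with probability p_i mixing uniform and Lipschitz-proportional
    weights L_i = ||a_i||^2 (lam=0.5 gives partially biased sampling);
    gradients are reweighted by 1/(m p_i) to remain unbiased."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = np.einsum('ij,ij->i', A, A)
    p = lam / m + (1 - lam) * L / L.sum()
    step = 1.0 / (2 * L.max())                   # conservative constant step
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        x -= step * (A[i] @ x - b[i]) * A[i] / (m * p[i])
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 20)) * rng.uniform(0.1, 3.0, size=(300, 1))
x_true = rng.standard_normal(20)
print(np.linalg.norm(sgd_weighted(A, A @ x_true) - x_true))
```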

Yurii Nesterov obtained similar bounds for sampling strategy and convergence in his "Efficiency of coordinate descent methods on huge-scale optimization problems". The key difference is that he samples which coordinate to update at each step, instead of sampling examples. The optimal sampling strategy comes down to picking coordinates in linear proportion to their Lipschitz constants, and the convergence rate likewise drops to the average of the per-coordinate Lipschitz constants rather than the worst one. Roughly speaking, the number of steps until convergence goes down to the average eigenvalue of the Hessian rather than the worst eigenvalue.

Deanna Needell gave background on the Kaczmarz algorithm, which gives an alternative way to motivate the importance sampling results. In particular, the first few slides illustrate why the order matters. She also gave an analytic expression for finding the best next point to sample in the quadratic case. This requires an $O(\text{\# of examples})$ search at each iteration. She then showed an approximation based on dimensionality reduction that takes $O(1)$ time per step.

Ben Recht made a similar point on the impact of choosing a better ordering in his presentation.



Resources:



Stephen Wright

Started with a nomenclature discussion on how "Stochastic Gradient Descent" methods don't qualify as gradient descent, because SGD steps can be in ascent directions for the global cost function. Instead, they should be referred to as "Stochastic Gradient" methods. Every speaker afterwards corrected themselves on this usage.

Gave an overview of the parallel Kaczmarz method and then extended the analysis to get a convergence rate for parallel Kaczmarz with "inconsistent reads" allowed -- the situation where the parameter vector gets modified while it's being read.

Resources:




Yann LeCun

Gave background on convolutional neural networks and showed a demo of online learning using ImageNet. Basically it was a network pre-trained on ImageNet, using nearest neighbor in the embedding induced by the activations of the last layer.



Impressively, it seems to do a good job learning to recognize from just a single example.

Also talked about connections between neural network learning and random matrix theory. You can see the connection if you rewrite the activations of a ReLU neural network as follows:

$$\sum_P C(x) \prod_{(i,j) \in P} W_{i,j}$$

The sum is over all paths through active nodes from the input layer to the output node. The coefficients $C(x)$ depend on the input data. This is a polynomial with degree equal to the number of layers, and there are results from random matrix theory saying that if the coefficients $C(x)$ are Gaussian distributed, then the local minima are close together in energy -- so, essentially, finding a local minimum is as good as finding the global minimum.

Resources:



Francis Bach

Presented results on convergence rates of SGD and how they are affected by lack of strong convexity.

Resources:




Jorge Nocedal

Talked about adapting quasi-Newton methods to the stochastic setting. Convergence of SGD depends on the square of the condition number, while Newton's method is independent of the condition number, at the cost of a step that costs $O(\text{dimensions}^2)$.

The compromise he proposes is a BFGS-like method where:
1. You use exact Hessian information to compute the product of the Hessian and the step direction.
2. You only do this once every 20 iterations.

This makes the cost of the L-BFGS-like update similar to an SGD update; a sketch is below.
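
Here is a compact sketch in that spirit for a least-squares objective (my simplification, not the authors' algorithm verbatim): plain SGD steps preconditioned by an L-BFGS two-loop recursion, with curvature pairs built from a subsampled Hessian-vector product once every $L$ iterations.

```python
import numpy as np
from collections import deque

def two_loop(g, pairs):
    """Standard L-BFGS two-loop recursion over stored (s, y) pairs."""
    q, alphas = g.copy(), []
    for s, y in reversed(pairs):                 # newest to oldest
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (y @ s) / (y @ y)                   # initial Hessian scaling
    for (s, y), a in zip(pairs, reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return q

def sqn(A, b, step=0.02, L=20, batch=32, mem=10, iters=4000, seed=0):
    """SGD steps preconditioned by L-BFGS; every L iterations we form a
    curvature pair (s, y) with y = (A_S^T A_S / |S|) s, an exact
    Hessian-vector product on a subsample S, so the expensive work is
    amortized over L cheap steps."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    pairs, x, x_snap = deque(maxlen=mem), np.zeros(n), np.zeros(n)
    for k in range(1, iters + 1):
        i = rng.integers(m)
        x = x - step * two_loop((A[i] @ x - b[i]) * A[i], pairs)
        if k % L == 0:                           # cheap amortized curvature
            s = x - x_snap
            S = rng.integers(0, m, size=batch)
            y = A[S].T @ (A[S] @ s) / batch
            if s @ y > 1e-10:                    # keep pairs well-defined
                pairs.append((s, y))
            x_snap = x.copy()
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 20))
x_true = rng.standard_normal(20)
print(np.linalg.norm(sqn(A, A @ x_true) - x_true))
```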

Resources



Asuman Ozdaglar

Introduced a way to extend ADMM to graph-structured problems without having to choose the order of updates. The setting of the problem is summarized below.


As you may recall, ADMM decouples the components of the loss by giving each loss its own copy of the parameters. You alternate between minimizing each component locally over its own copy of the parameters, and setting the local parameter values from the functions that have already been minimized.
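
For concreteness, here is a minimal consensus-ADMM sketch (my own toy: every component loss shares the full parameter vector; the graph-structured setting in the talk generalizes this to partial overlap):

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=100):
    """Consensus ADMM for min_x sum_i ||A_i x - b_i||^2 / 2. Each loss
    keeps a private copy x_i, copies are pulled toward the shared
    average z, and dual variables u_i enforce eventual agreement."""
    n = As[0].shape[1]
    xs = [np.zeros(n) for _ in As]
    us = [np.zeros(n) for _ in As]
    z = np.zeros(n)
    for _ in range(iters):
        for j, (A, b) in enumerate(zip(As, bs)):  # local minimizations
            xs[j] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                    A.T @ b + rho * (z - us[j]))
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # consensus
        for j in range(len(As)):                  # dual ascent on agreement
            us[j] += xs[j] - z
    return z

rng = np.random.default_rng(0)
As = [rng.standard_normal((40, 5)) for _ in range(3)]
x_true = rng.standard_normal(5)
print(np.linalg.norm(consensus_admm(As, [A @ x_true for A in As]) - x_true))
```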

These ADMM steps can be implemented as message passing on a factor graph -- the factors here are components of the cost function, whereas the nodes are the variables the cost function depends on. Each component function depends on a subset of the variables, and each variable is involved in a subset of the component functions.

The implementation of ADMM is similar to Divide and Concur; a readable overview is given in Yedidia's message-passing paper.



One inconvenience of this approach is that it requires establishing an arbitrary order of message updates.

Ozdaglar's idea is to restore symmetry by adding extra variables, one for each direction of the constraint, plus an extra constraint that forces them to agree. The update is done in parallel, like parallel BP, followed by an extra step that synchronizes the extra constraint variables.

Resources:



John Duchi

John Duchi gave a whiteboard talk on the convergence of zeroth-order optimization. Happily, the convergence is only a factor of $\sqrt{\text{dimensions}}$ worse than standard SGD.

Started with a succinct derivation of the non-asymptotic error of a proximal averaging algorithm, which looks a lot like averaged SGD, after $k$ steps, in terms of the errors of the gradients. The actual formula has no O-terms and the proof is found in the notes, but roughly it looks like this:

$$\mathbb{E}[\text{error}] \le O\left(\frac{1}{\sqrt{k}}\right)+\frac{1}{k}\sum_{i=1}^{k} \mathbb{E}[\|\epsilon_i\|]$$

The error here is in terms of the value of the function, which is what we care about in applications (as opposed to the distance from the true parameter vector). As $k$ increases, the second term vanishes and you get the regular $1/\sqrt{k}$ convergence. If you don't care about constraints, the "prox" step can be replaced by an SGD step.

Resources:


Mark Schmidt

Gave an overview of their Stochastic Average Gradient (SAG) algorithm. Full details and many extensions are in their hefty 45-page arXiv paper.

Their motivation is to combine the fast initial convergence of stochastic methods and the fast late-stage convergence of full-gradient methods, while keeping the cheap iteration cost of stochastic gradient.



Stochastic Average Gradient reaches this goal with a simple modification of stochastic gradient: at each step, in addition to the gradient computed for the current data point, you also add in the stored gradients computed on previous data points. Those gradients may be out of date, but for a strongly convex loss with convex component functions, this staleness doesn't hurt.
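
A minimal SAG sketch for least squares (mine; the implementation trick is keeping a running sum of the stored per-example gradients, so each iteration costs the same as an SGD step even though the update uses all of them):

```python
import numpy as np

def sag(A, b, epochs=50, seed=0):
    """Stochastic Average Gradient on f(x) = (1/m) sum_i f_i(x) with
    f_i(x) = (a_i @ x - b_i)^2 / 2: store the last gradient seen for
    every example, refresh one entry per iteration, and step along the
    average of the (possibly stale) table."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    step = 1.0 / np.einsum('ij,ij->i', A, A).max()  # ~1/L, a practical choice
    grads = np.zeros((m, n))                 # one stored gradient per example
    g_sum = np.zeros(n)
    x = np.zeros(n)
    for _ in range(epochs * m):
        i = rng.integers(m)
        g_new = (A[i] @ x - b[i]) * A[i]
        g_sum += g_new - grads[i]            # O(n) running-sum update
        grads[i] = g_new
        x -= step * g_sum / m
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 15))
x_true = rng.standard_normal(15)
print(np.linalg.norm(sag(A, A @ x_true) - x_true))
```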

Schmidt et al. advocated sampling data points with high curvature more often, based on the argument that such gradients might change faster and need to be re-evaluated more often. However, a formal justification of this intuition is not available, and instead they fall back on the same analysis as the Kaczmarz importance sampling described earlier.

One difference of weighted sampling here from the standard SGD setting is that examples can be sampled more often without needing to correct for the bias, because the weight of each gradient in SAG is $1/n$ regardless of how many times the function is sampled. However, bias correction will come up as an issue in any large-scale adaptation of SAG where you can't store all the gradients in memory.

Resources:


Lin Xiao

Gave an overview of stochastic variance-reduced gradient (SVRG) methods. The idea of variance reduction is to periodically evaluate the full gradient and then use it to adjust future gradient steps. If we evaluated the full gradient at a previous point $\tilde{x}$, the gradient update becomes:

$$x_{k+1}=x_k - \nu \left(\nabla f_{i_k}(x_k) - \nabla f_{i_k}(\tilde{x})+\nabla F(\tilde{x})\right)$$

Here $\nabla F(\tilde{x})$ is the full gradient evaluated at some previous point $\tilde{x}$, and $\nabla f_{i_k}(x_k)$ is the gradient of the loss on the current example $i_k$, evaluated at the current iterate $x_k$.

The idea of variance reduction is illustrated below.



On the left you see what would happen if you applied the variance reduction formula with $\nabla F(\tilde{x})$ recomputed at each step -- that reduces to regular gradient descent. If instead we evaluate the full gradient once every $k$ iterations, the correction is based on a stale value of the gradient and not quite right; however, the mean error is zero, so the update remains an unbiased gradient estimate.
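
Here's a small SVRG sketch for least squares (my illustration of the update above; the stage length and step size are ad hoc, not tuned values from the talk):

```python
import numpy as np

def svrg(A, b, stages=30, seed=0):
    """SVRG: once per stage, compute the full gradient at a snapshot
    x_snap; inner steps then use g_i(x) - g_i(x_snap) + full, which is
    unbiased and whose variance shrinks as x and x_snap near the optimum."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    step = 0.1 / np.einsum('ij,ij->i', A, A).mean()
    g = lambda x, i: (A[i] @ x - b[i]) * A[i]
    x = np.zeros(n)
    for _ in range(stages):
        x_snap = x.copy()
        full = A.T @ (A @ x_snap - b) / m    # one full gradient per stage
        for _ in range(2 * m):               # cheap inner steps
            i = rng.integers(m)
            x -= step * (g(x, i) - g(x_snap, i) + full)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((300, 20))
x_true = rng.standard_normal(20)
print(np.linalg.norm(svrg(A, A @ x_true) - x_true))
```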

Then they introduce a weighted sampling strategy, where data points are sampled proportionally to the condition numbers of the individual loss functions. When the number of iterations is much larger than the number of examples, weighted sampling drops convergence to $O(C_{\text{avg}})$ steps as opposed to $O(C_{\max})$ steps for uniform sampling, where $C_{\text{avg}}$ is the average condition number over all component loss functions.

Resources:



James Spall

Gave results on Stochastic Approximation methods. Approximation can be seen as minimization of the distance between the solution and the ideal solution, so SA methods come down to some form of stochastic optimization. The difference is that the setting is more general: non-convexity, gradients that can't be computed, possibly discrete problems.

The standard approach to derivative-free methods is Finite Difference Stochastic Approximation (FDSA), where to numerically compute the gradient you do $2p$ function evaluations, $p$ being the dimensionality.

The idea of the Simultaneous Perturbation Stochastic Approximation (SPSA) method is to evaluate the gradient along a randomly chosen direction and take a step in that direction with length proportional to the directional gradient. This requires two function evaluations instead of $2p$ for FDSA, and works just as well.
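
A minimal SPSA sketch (mine, using Spall's standard gain-decay exponents 0.602 and 0.101; since the perturbation entries are $\pm 1$, the elementwise inverse of the perturbation vector is the vector itself):

```python
import numpy as np

def spsa_minimize(f, x0, iters=5000, a=0.01, c=0.1, seed=0):
    """SPSA: estimate the gradient from just TWO function evaluations
    by perturbing all coordinates at once with a random +/-1 vector d:
        g_hat = (f(x + c_k d) - f(x - c_k d)) / (2 c_k) * d
    then take a step proportional to that directional slope."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(1, iters + 1):
        a_k, c_k = a / k ** 0.602, c / k ** 0.101  # standard gain decay
        d = rng.choice([-1.0, 1.0], size=x.shape)
        g_hat = (f(x + c_k * d) - f(x - c_k * d)) / (2 * c_k) * d
        x -= a_k * g_hat
    return x

# Minimize a badly scaled quadratic without any gradient access.
Q = np.diag([1.0, 5.0, 25.0])
print(spsa_minimize(lambda x: x @ Q @ x, np.ones(3)))  # approaches origin
```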

Two summary slides from the SPSA talk:



Here is a graph from a numerical simulation of SPSA vs. the standard approach.


He gave a more in-depth overview of these methods in a 2012 NIPS talk. It's available as a YouTube video, but here are screenshots of some intro slides.

Simple SPSA is essentially a first-order method, and has the same problems as other first-order methods:

  • sensitivity to the scaling of the units of $\theta$
  • slow convergence in the final phase

To address these, he introduced Adaptive Stochastic Approximation by the Simultaneous Perturbation Method (ASP), which goes further by numerically estimating the Hessian in addition to the gradient.

The approach to approximating the Hessian is similar in spirit to SPSA -- compute the gradient in two random directions and estimate the Hessian numerically from that (formula 2.2 in "Adaptive Stochastic Approximation by the Simultaneous Perturbation Method"). This requires four function evaluations.

This estimate is noisy, so momentum is used to smooth the Hessian estimates.

More recent work (Spall, 2009) gives an improved formula for estimating the Hessian numerically using what he calls a "feedback term".

Adaptive SPSA methods store the Hessian approximation explicitly, like BFGS, so they aren't directly applicable to deep neural nets.

Resources:



Comments:

Peter Richtarik:
Dear Yaroslav,

This is a very nice summary of the talks; great job.

Let me offer a few minor points regarding my talk:

i) The 'Hydra' paper (Distributed Coordinate Descent) is very different from the 'APPROX' paper. In fact, there is essentially no technical intersection between the two. They are related, but in a complementary way.

The Hydra paper focuses on the computation of an ESO for a distributed sampling, and on proving that partitioning the coordinates among nodes at most doubles the number of iterations. The analysis applies to the strongly convex case.

The APPROX paper focuses on designing and analyzing accelerated coordinate descent methods which 'work'. Also, the paper comes up with new stepsizes for *any* coordinate descent method based on the concept of ESO (including Hydra). That development is orthogonal to the APPROX method itself.

ii) The 'setting' slide is from a different talk (a new analysis of Hogwild!) I gave a year ago -- but the paper has not yet been put online.

Peter