From: Allen Li
Subject: Re: [IMPORTANT] Server migration: please update your git repositories before 31/12/2017
Date: Sat, 30 Dec 2017 13:50:03 -0800
To: Bastien
Cc: Achim Gratz, emacs-orgmode@gnu.org

On Sat, Dec 30, 2017 at 3:57 AM, Bastien wrote:
> Hi Achim,
>
> Achim Gratz writes:
>
>> On 29.12.2017 at 13:26, Bastien Guerry wrote:
>>> Migrating to a new vultr instance was easier than trying to upgrade
>>> the rackspace hosting service, and the vultr pricing is better.
>>
>> It's water under the bridge now, but if there had been a discussion
>> here we might have converged on a different solution.
>
> Yes, I should have raised the issue on the list to see if people would
> come up with preferable solutions; apologies for that.
>
> But I had very little time and the clock was ticking.
>
> Since I wasn't sure I could follow a potentially long discussion with
> many suggestions, and since the solution I envisioned does not impact
> regular users, I thought it was best to *just do it*.
>
> Nothing is irreversible; my time is gone anyway.
>
> So if you want to open a discussion on better hosting plans, and if you
> or someone else is willing to handle the migration and to maintain the
> server afterwards, we can of course discuss this.

It sounds like we already have a solution, so I don't suggest anyone
spend more time on this; I am sure there are plenty of bugs that would
be more worth fixing.

I don't want to blame anyone, just to clarify the state of affairs. It
looks like Rackspace failed to communicate properly and gave only short
notice. People make mistakes, myself included, so I don't think
pointing fingers is productive.