SSH connections to hydra-slave{1,2,3} fail during builds

  • Done
Details
4 participants
  • Ludovic Courtès
  • Marius Bakke
  • Mark H Weaver
  • Ricardo Wurmus
Owner
unassigned
Submitted by
Mark H Weaver
Severity
important
Mark H Weaver wrote on 22 Jul 2019 01:56
New linux-libre failed to build on armhf on Berlin
(address . bug-guix@gnu.org)
87o91n3ps0.fsf@netris.org
In commit 1ad9c105c208caa9059924cbfbe4759c8101f6c9, I changed our
linux-libre packages to deblob the kernel source tarballs
ourselves, i.e. to run the deblobbing scripts provided by the
linux-libre project to produce linux-libre source tarballs from the
upstream linux tarballs:


The following queries show that the updated packages built successfully
on x86_64, i686, and aarch64, but they all failed on armhf:


Unfortunately, I'm unable to get *any* information about what went wrong
from Cuirass. None of the failed builds have associated log files, and
the build details page has no useful information either. For example:

https://ci.guix.gnu.org/build/1488517/details

My first guess was that something went wrong in the 'computed' origin
that runs the deblobbing script. However, that's apparently not the
case, because all of the updated 'linux-libre-headers' packages built
successfully on armhf, and those use the same source tarballs as the
main 'linux-libre' packages.
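
For context, a "computed" origin is an origin whose method builds the
source instead of downloading it: the hash arguments are ignored, and
the content is whatever a G-expression produces, in this case the
result of running the deblob scripts over the upstream tarball. A
minimal sketch of the idea (not the exact definition in
gnu/packages/linux.scm):

(use-modules (guix gexp))

;; Sketch only: ignore the usual hash arguments and build the source
;; from the forced G-expression instead of fetching it.
(define (computed-origin-method gexp-promise hash-algo hash)
  (gexp->derivation "computed-origin"
                    (force gexp-promise)
                    #:graft? #f))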


Can someone help me find out what's going on here? Until then, I'm
sorry to say that armhf-linux users will be unable to update their
systems.

Mark
Mark H Weaver wrote on 22 Jul 2019 02:35
(no subject)
(address . control@debbugs.gnu.org)
87imru52ix.fsf@netris.org
severity 36754 important
thanks
Ricardo Wurmus wrote on 22 Jul 2019 18:10
Re: bug#36754: New linux-libre failed to build on armhf on Berlin
(address . mhw@netris.org)(address . 36754@debbugs.gnu.org)
87pnm2cant.fsf@elephly.net
Mark H Weaver <mhw@netris.org> writes:

> Unfortunately, I'm unable to get *any* information about what went wrong
> from Cuirass. None of the failed builds have associated log files, and
> the build details page has no useful information either. For example:
>
> https://ci.guix.gnu.org/build/1488517/details

On that page I see a link to the build log, but it appears to be
truncated:


Maybe the build node died before the build could be completed?

--
Ricardo
Mark H Weaver wrote on 22 Jul 2019 19:13
(name . Ricardo Wurmus)(address . rekado@elephly.net)(address . 36754@debbugs.gnu.org)
87zhl656xk.fsf@netris.org
Hi Ricardo,

Interesting. I distinctly remember that there was no log file when I
looked last time. Hmm.

Anyway, it seems that now, all of the failed builds have either build
logs available or else information about which dependency failed. I
don't remember seeing any of this last time, but I'm glad to see it now.

A pattern has now emerged, but I don't know what it means. All of the
armhf kernel builds failed except for linux-libre-arm-veyron-5.2.2,
which succeeded:

https://ci.guix.gnu.org/build/1488502/details (arm-veyron-5.2.2)

Apart from this anomalous success, all of the armhf 5.2.2 and 4.19.60
builds have a truncated log file:

https://ci.guix.gnu.org/build/1488517/details (5.2.2)
https://ci.guix.gnu.org/build/1488503/details (4.19.60)
https://ci.guix.gnu.org/build/1488513/details (arm-generic-5.2.2)
https://ci.guix.gnu.org/build/1488519/details (arm-generic-4.19.60)
https://ci.guix.gnu.org/build/1488504/details (arm-omap2plus-5.2.2)
https://ci.guix.gnu.org/build/1488501/details (arm-omap2plus-4.19.60)

This pattern seems too regular to be a coincidence. Can we find out
which build machines were used for these builds?

All of the 4.14.134 builds failed in the deblobbing step, due to a
timeout (1 hour of silence) while packing the linux-libre tarball:


I'm not sure how to deal with this. This is a computed origin, not a
normal package, and so I don't see a way to configure a longer timeout.
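
(For a one-off local build the silence timeout can at least be raised
on the command line, e.g.:

guix build linux-libre@4.14.134 --max-silent-time=7200

but that does not help builds scheduled by Cuirass.)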

Perhaps I should make the tarball packing and unpacking operations
verbose, to work around the issue. Of course that's our usual practice,
but I find it suboptimal because any warnings will be buried in a
mountain of uninteresting output.
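
Concretely, that would mean passing tar's verbose flag in the packing
step, along these lines (a sketch; file and directory names are
illustrative):

(use-modules (guix build utils))

;; Pack the deblobbed tree with -v so tar prints one line per file,
;; keeping the daemon's 1-hour silence timer from expiring.
(invoke "tar" "-cvaf" "linux-libre-4.14.134-guix.tar.xz"
        "--sort=name" "--mtime=@0"
        "linux-libre-4.14.134")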

Thoughts? Anyway, thanks for looking into it.

Mark
Marius Bakke wrote on 23 Jul 2019 18:46
(address . 36754@debbugs.gnu.org)
87pnm04s21.fsf@devup.no
Mark H Weaver <mhw@netris.org> writes:

> Hi Ricardo,
>
> Interesting. I distinctly remember that there was no log file when I
> looked last time. Hmm.
>
> Anyway, it seems that now, all of the failed builds have either build
> logs available or else information about which dependency failed. I
> don't remember seeing any of this last time, but I'm glad to see it now.
>
> A pattern has now emerged, but I don't know what it means. All of the
> armhf kernel builds failed except for linux-libre-arm-veyron-5.2.2,
> which succeeded:
>
> https://ci.guix.gnu.org/build/1488502/details (arm-veyron-5.2.2)
>
> Apart from this anomalous success, all of the armhf 5.2.2 and 4.19.60
> builds have a truncated log file:
>
> https://ci.guix.gnu.org/build/1488517/details (5.2.2)
> https://ci.guix.gnu.org/build/1488503/details (4.19.60)
> https://ci.guix.gnu.org/build/1488513/details (arm-generic-5.2.2)
> https://ci.guix.gnu.org/build/1488519/details (arm-generic-4.19.60)
> https://ci.guix.gnu.org/build/1488504/details (arm-omap2plus-5.2.2)
> https://ci.guix.gnu.org/build/1488501/details (arm-omap2plus-4.19.60)
>
> This pattern seems too regular to be a coincidence. Can we find out
> which build machines were used for these builds?

I tried building 5.2.2 'interactively' on Berlin, and got an SSH error:

CC [M] net/openvswitch/vport-geneve.o
CC [M] net/openvswitch/vport-gre.o
LD [M] net/openvswitch/openvswitch.o
;;; [2019/07/23 05:14:53.501502, 0] read_from_channel_port: [GSSH ERROR] Error reading from the channel: #<input-output: channel (closed) 14c0e60>
Backtrace:
16 (apply-smob/1 #<catch-closure b79640>)
In ice-9/boot-9.scm:
705:2 15 (call-with-prompt _ _ #<procedure default-prompt-handle…>)
In ice-9/eval.scm:
619:8 14 (_ #(#(#<directory (guile-user) bfb140>)))
In guix/ui.scm:
1747:12 13 (run-guix-command _ . _)
In guix/scripts/offload.scm:
781:22 12 (guix-offload . _)
In ice-9/boot-9.scm:
829:9 11 (catch _ _ #<procedure 7f576678d910 at guix/ui.scm:703…> …)
829:9 10 (catch _ _ #<procedure 7f576678d928 at guix/ui.scm:826…> …)
In guix/scripts/offload.scm:
580:19 9 (process-request _ _ _ _ #:print-build-trace? _ # _ # _)
531:6 8 (call-with-timeout _ _ _)
361:2 7 (transfer-and-offload #<derivation /gnu/store/yfns7ga4…> …)
In ice-9/boot-9.scm:
829:9 6 (catch _ _ #<procedure dbdab0 at guix/scripts/offload.…> …)
In guix/scripts/offload.scm:
385:6 5 (_)
In guix/store.scm:
1203:15 4 (_ #<store-connection 256.99 19a0ba0> _ _)
692:11 3 (process-stderr #<store-connection 256.99 19a0ba0> _)
In guix/serialization.scm:
87:11 2 (read-int _)
73:12 1 (get-bytevector-n* #<input-output: channel (closed) 14…> …)
In unknown file:
0 (get-bytevector-n #<input-output: channel (closed) 14c…> …)

ERROR: In procedure get-bytevector-n:
Throw to key `guile-ssh-error' with args `("read_from_channel_port" "Error reading from the channel" #<input-output: channel (closed) 14c0e60> #f)'.
guix build: error: build of `/gnu/store/yfns7ga468vmv9jn72snk79b16p8mhfa-linux-libre-5.2.2.drv' failed

real 637m24.906s
user 0m6.661s
sys 0m0.897s

Unfortunately I failed to record which machine was used and don't know a
way to find out after the fact.
Mark H Weaver wrote on 23 Jul 2019 19:33
Re: bug#36754: SSH connections to hydra-slave{1,2,3} fail during builds (was: New linux-libre failed to build on armhf on Berlin)
(name . Marius Bakke)(address . mbakke@fastmail.com)
87ef2gr6z3.fsf@netris.org
retitle 36754 SSH connections to hydra-slave{1,2,3} fail during builds
thanks

Hi,

I've added Ludovic to the CC list, since he recently added
hydra-slave{1,2,3} to Berlin.

Marius wrote:
> I tried building 5.2.2 'interactively' on Berlin, and got an SSH error:
>
> CC [M] net/openvswitch/vport-geneve.o
> CC [M] net/openvswitch/vport-gre.o
> LD [M] net/openvswitch/openvswitch.o
> ;;; [2019/07/23 05:14:53.501502, 0] read_from_channel_port: [GSSH ERROR] Error reading from the channel: #<input-output: channel (closed) 14c0e60>
> Backtrace:
> 16 (apply-smob/1 #<catch-closure b79640>)
> In ice-9/boot-9.scm:
> 705:2 15 (call-with-prompt _ _ #<procedure default-prompt-handle…>)
> In ice-9/eval.scm:
> 619:8 14 (_ #(#(#<directory (guile-user) bfb140>)))
> In guix/ui.scm:
> 1747:12 13 (run-guix-command _ . _)
> In guix/scripts/offload.scm:
> 781:22 12 (guix-offload . _)
> In ice-9/boot-9.scm:
> 829:9 11 (catch _ _ #<procedure 7f576678d910 at guix/ui.scm:703…> …)
> 829:9 10 (catch _ _ #<procedure 7f576678d928 at guix/ui.scm:826…> …)
> In guix/scripts/offload.scm:
> 580:19 9 (process-request _ _ _ _ #:print-build-trace? _ # _ # _)
> 531:6 8 (call-with-timeout _ _ _)
> 361:2 7 (transfer-and-offload #<derivation /gnu/store/yfns7ga4…> …)
> In ice-9/boot-9.scm:
> 829:9 6 (catch _ _ #<procedure dbdab0 at guix/scripts/offload.…> …)
> In guix/scripts/offload.scm:
> 385:6 5 (_)
> In guix/store.scm:
> 1203:15 4 (_ #<store-connection 256.99 19a0ba0> _ _)
> 692:11 3 (process-stderr #<store-connection 256.99 19a0ba0> _)
> In guix/serialization.scm:
> 87:11 2 (read-int _)
> 73:12 1 (get-bytevector-n* #<input-output: channel (closed) 14…> …)
> In unknown file:
> 0 (get-bytevector-n #<input-output: channel (closed) 14c…> …)
>
> ERROR: In procedure get-bytevector-n:
> Throw to key `guile-ssh-error' with args `("read_from_channel_port" "Error reading from the channel" #<input-output: channel (closed) 14c0e60> #f)'.
> guix build: error: build of `/gnu/store/yfns7ga468vmv9jn72snk79b16p8mhfa-linux-libre-5.2.2.drv' failed
>
> real 637m24.906s
> user 0m6.661s
> sys 0m0.897s

Thank you, this is helpful.

> Unfortunately I failed to record which machine was used and don't know a
> way to find out after the fact.

I believe it was hydra-slave2, one of the three armhf machines that I
host which were formerly part of hydra.gnu.org's build farm and were
recently added to Berlin by Ludovic. I checked hydra-slave{1,2,3} for
build log files corresponding to the derivation above, and found that
all three of them had attempted it recently:

hydra-slave2 attempted to build it on July 23 08:07 UTC.
hydra-slave3 attempted to build it on July 22 16:40 UTC.
hydra-slave1 attempted to build it on July 22 04:44 UTC.

To be precise, each of those dates corresponds to the end of the build
attempt. All three build logs are truncated on the build machine as
well, with no error message at the end.

I now believe that these failures are related to the newly added armhf
build slaves, and that they have nothing to do with the recent changes
to our linux-libre packages.

Well, except for the silence timeout that sometimes happens on slower
machines while deblobbing linux-libre. That's a separate issue.

Thanks,
Mark
Mark H Weaver wrote on 23 Jul 2019 19:49
(name . Marius Bakke)(address . mbakke@fastmail.com)
878ssor680.fsf@netris.org
I wrote earlier:
> I now believe that these failures are related to the newly added armhf
> build slaves, and that they have nothing to do with the recent changes
> to our linux-libre packages.

I should mention that the armhf build slaves are on a private network,
and I use my public-facing internet server to forward TCP connections to
them, using the following entries in /etc/inetd.conf:

# TCP-level forwards for SSH connections to build machines for the GNU
# Guix build farm:
7275 stream tcp nowait nobody /bin/nc /bin/nc -w 10 172.19.189.11 7275
7276 stream tcp nowait nobody /bin/nc /bin/nc -w 10 172.19.189.12 7276
7274 stream tcp nowait nobody /bin/nc /bin/nc -w 10 172.19.189.13 7274
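
Each line makes inetd listen on the given port and spawn nc to relay
the TCP stream to the corresponding private address. (Note that with
some nc implementations, -w also acts as an idle timeout rather than
just a connect timeout.) A forward can be probed directly with the
OpenSSH client, e.g. (user and host names hypothetical):

ssh -v -p 7275 hydra@my-public-server.example true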

It's possible that this arrangement is somehow part of the problem.
However, note that nothing has changed here in several years, and it
worked fine on hydra.gnu.org. The build slaves were running a *very*
old version of Guix though. It seems likely that the new Guile-SSH code
doesn't cope well with this setup.

Mark
Ludovic Courtès wrote on 23 Jul 2019 23:26
Re: bug#36754: SSH connections to hydra-slave{1,2,3} fail during builds
(name . Mark H Weaver)(address . mhw@netris.org)
87lfwo1lz0.fsf@gnu.org
Hi Mark,

Mark H Weaver <mhw@netris.org> skribis:

> I wrote earlier:
>> I now believe that these failures are related to the newly added armhf
>> build slaves, and that they have nothing to do with the recent changes
>> to our linux-libre packages.
>
> I should mention that the armhf build slaves are on a private network,
> and I use my public-facing internet server to forward TCP connections to
> them, using the following entries in /etc/inetd.conf:
>
> # TCP-level forwards for SSH connections to build machines for the GNU
> # Guix build farm:
> 7275 stream tcp nowait nobody /bin/nc /bin/nc -w 10 172.19.189.11 7275
> 7276 stream tcp nowait nobody /bin/nc /bin/nc -w 10 172.19.189.12 7276
> 7274 stream tcp nowait nobody /bin/nc /bin/nc -w 10 172.19.189.13 7274
>
> It's possible that this arrangement is somehow part of the problem.
> However, note that nothing has changed here in several years, and it
> worked fine on hydra.gnu.org. The build slaves were running a *very*
> old version of Guix though. It seems likely that the new Guile-SSH code
> doesn't cope well with this setup.

I noticed that connections to the machines were unstable (using
OpenSSH’s client). That is, the connection would eventually “hang”,
apparently several times a day.

Currently we have an SSH tunnel set up on berlin to connect to each of
these machines via overdrive1.guixsd.org. This setup proved to be
robust in the past (we used it to connect to another build machine), so
I suspect something’s wrong on “your” end of the network. It’s hard to
tell exactly what, though.

Ideas?

If it’s causing build failures, I’m afraid we’ll have to comment out
those machines from berlin’s machines.scm until we’ve figured it out.
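
For reference, each machine is declared in machines.scm as a
build-machine record along these lines (values hypothetical);
commenting the entry out removes the machine from the offload
rotation:

(build-machine
  (name "hydra-slave1.netris.org")
  (system "armhf-linux")
  (user "hydra")
  (port 7275)
  (host-key "ssh-ed25519 AAAA... root@hydra-slave1")
  (private-key "/root/.ssh/id_rsa_build"))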

Thanks,
Ludo’.
Ricardo Wurmus wrote on 23 Jul 2019 23:55
(name . Ludovic Courtès)(address . ludo@gnu.org)
87pnm0bem6.fsf@elephly.net
Ludovic Courtès <ludo@gnu.org> writes:

> Currently we have an SSH tunnel set up on berlin to connect to each of
> these machines via overdrive1.guixsd.org. This setup proved to be
> robust in the past (we used it to connect to another build machine), so
> I suspect something’s wrong on “your” end of the network. It’s hard to
> tell exactly what, though.

FWIW by the end of this week we should have the firewall changes
implemented so we can do without the SSH tunnel.

--
Ricardo
Mark H Weaver wrote on 24 Jul 2019 13:09
(name . Ludovic Courtès)(address . ludo@gnu.org)
87ef2fpu2a.fsf@netris.org
Hi Ludovic,

Ludovic Courtès <ludo@gnu.org> wrote:
> I noticed that connections to the machines were unstable (using
> OpenSSH’s client). That is, the connection would eventually “hang”,
> apparently several times a day.
>
> Currently we have an SSH tunnel set up on berlin to connect to each of
> these machines via overdrive1.guixsd.org. This setup proved to be
> robust in the past (we used it to connect to another build machine), so
> I suspect something’s wrong on “your” end of the network. It’s hard to
> tell exactly what, though.
>
> Ideas?

Okay, I'll look into it. I'm very busy with something else for the next
couple of days, but I'll try to get to it in the next week.

> If it’s causing build failures, I’m afraid we’ll have to comment out
> those machines from berlin’s machines.scm until we’ve figured it out.

Agreed.

Thanks,
Mark
Ludovic Courtès wrote on 24 Jul 2019 16:56
(name . Mark H Weaver)(address . mhw@netris.org)
87lfwnxyzg.fsf@gnu.org
Hello,

Mark H Weaver <mhw@netris.org> skribis:

> Ludovic Courtès <ludo@gnu.org> wrote:
>> I noticed that connections to the machines were unstable (using
>> OpenSSH’s client). That is, the connection would eventually “hang”,
>> apparently several times a day.
>>
>> Currently we have an SSH tunnel set up on berlin to connect to each of
>> these machines via overdrive1.guixsd.org. This setup proved to be
>> robust in the past (we used it to connect to another build machine), so
>> I suspect something’s wrong on “your” end of the network. It’s hard to
>> tell exactly what, though.
>>
>> Ideas?
>
> Okay, I'll look into it. I'm very busy with something else for the next
> couple of days, but I'll try to get to it in the next week.

OK!

>> If it’s causing build failures, I’m afraid we’ll have to comment out
>> those machines from berlin’s machines.scm until we’ve figured it out.
>
> Agreed.

I’ve commented them out now.

Thanks,
Ludo’.
Marius Bakke wrote on 1 Aug 2019 16:09
87k1bx2d1f.fsf@devup.no
The truncated log files seem to happen for other builds as well, even
within the Berlin data center.

https://ci.guix.gnu.org/log/n3ra1b8ic6qhfinnhb80mrn7snsqws9d-geocode-glib-3.26.0
https://ci.guix.gnu.org/log/zqhqlib00i8f7f10g4c2dfzprw16h4xv-scintilla-4.2.0
https://ci.guix.gnu.org/log/718jmbq94mvdgnmjyqgxgy7zaj8xzxk3-htslib-1.9

All of these builds are for i686-linux.

Mark: are the armhf nodes still operational? I would like to re-enable
them again, since we desperately need the computing power with four huge
branches going concurrently at the moment.
Ricardo Wurmus wrote on 1 Aug 2019 17:39
(name . Ludovic Courtès)(address . ludo@gnu.org)
87v9vgsxnc.fsf@elephly.net
Ricardo Wurmus <rekado@elephly.net> writes:

> Ludovic Courtès <ludo@gnu.org> writes:
>
>> Currently we have an SSH tunnel set up on berlin to connect to each of
>> these machines via overdrive1.guixsd.org. This setup proved to be
>> robust in the past (we used it to connect to another build machine), so
>> I suspect something’s wrong on “your” end of the network. It’s hard to
>> tell exactly what, though.
>
> FWIW by the end of this week we should have the firewall changes
> implemented so we can do without the SSH tunnel.

The firewall changes have been applied today.

--
Ricardo
Mark H Weaver wrote on 1 Aug 2019 18:37
(name . Marius Bakke)(address . mbakke@fastmail.com)
87pnlokfix.fsf@netris.org
Hi Marius,

Marius Bakke <mbakke@fastmail.com> wrote:

> The truncated log files seem to happen for other builds as well, even
> within the Berlin data center.
>
> https://ci.guix.gnu.org/log/n3ra1b8ic6qhfinnhb80mrn7snsqws9d-geocode-glib-3.26.0
> https://ci.guix.gnu.org/log/zqhqlib00i8f7f10g4c2dfzprw16h4xv-scintilla-4.2.0
> https://ci.guix.gnu.org/log/718jmbq94mvdgnmjyqgxgy7zaj8xzxk3-htslib-1.9
>
> All of these builds are for i686-linux.

Thanks, that's very useful information.

> Mark: are the armhf nodes still operational?

I assume so. They all respond to pings anyway, and I haven't touched
them since before they were disconnected from Berlin. (I would need to
boot up my other, more secure computer to try SSHing into them).

> I would like to re-enable them again, since we desperately need the
> computing power with four huge branches going concurrently at the
> moment.

I have no objection, but since Ludovic made the decision to disconnect
them, it would be good to hear from him first.

Thanks,
Mark
Ricardo Wurmus wrote on 1 Aug 2019 23:06
(name . Mark H Weaver)(address . mhw@netris.org)
87sgqksiic.fsf@elephly.net
Mark H Weaver <mhw@netris.org> writes:

>> Mark: are the armhf nodes still operational?
>
> I assume so. They all respond to pings anyway, and I haven't touched
> them since before they were disconnected from Berlin. (I would need to
> boot up my other, more secure computer to try SSHing into them).
>
>> I would like to re-enable them again, since we desperately need the
>> computing power with four huge branches going concurrently at the
>> moment.
>
> I have no objection, but since Ludovic made the decision to disconnect
> them, it would be good to hear from him first.

Now that we should be able to SSH to them directly from Berlin, we can
try connecting and perhaps upgrading the guix-daemon on these machines.

--
Ricardo
Ricardo Wurmus wrote on 7 Aug 2019 16:30
(name . Mark H Weaver)(address . mhw@netris.org)
87d0hhxd29.fsf@elephly.net
Ricardo Wurmus <rekado@elephly.net> writes:

> Mark H Weaver <mhw@netris.org> writes:
>
>>> Mark: are the armhf nodes still operational?
>>
>> I assume so. They all respond to pings anyway, and I haven't touched
>> them since before they were disconnected from Berlin. (I would need to
>> boot up my other, more secure computer to try SSHing into them).
>>
>>> I would like to re-enable them again, since we desperately need the
>>> computing power with four huge branches going concurrently at the
>>> moment.
>>
>> I have no objection, but since Ludovic made the decision to disconnect
>> them, it would be good to hear from him first.
>
> Now that we should be able to SSH to them directly from Berlin, we can
> try connecting and perhaps upgrading the guix-daemon on these machines.

I have removed the SSH tunnel configuration from /etc/guix/machines.scm
and re-enabled the machines.

Let’s see if this makes any difference. If not, we should try to upgrade
Guix on these build machines.
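
(Assuming they run Guix System, upgrading would be roughly:

guix pull
sudo guix system reconfigure /etc/config.scm

with the configuration file path here being hypothetical.)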

--
Ricardo
Ludovic Courtès wrote on 16 Aug 2019 12:25
(name . Ricardo Wurmus)(address . rekado@elephly.net)
87blwpieyy.fsf@gnu.org
Hi,

Ricardo Wurmus <rekado@elephly.net> skribis:

> Ricardo Wurmus <rekado@elephly.net> writes:
>
>> Mark H Weaver <mhw@netris.org> writes:
>>
>>>> Mark: are the armhf nodes still operational?
>>>
>>> I assume so. They all respond to pings anyway, and I haven't touched
>>> them since before they were disconnected from Berlin. (I would need to
>>> boot up my other, more secure computer to try SSHing into them).
>>>
>>>> I would like to re-enable them again, since we desperately need the
>>>> computing power with four huge branches going concurrently at the
>>>> moment.
>>>
>>> I have no objection, but since Ludovic made the decision to disconnect
>>> them, it would be good to hear from him first.
>>
>> Now that we should be able to SSH to them directly from Berlin, we can
>> try connecting and perhaps upgrading the guix-daemon on these machines.
>
> I have removed the SSH tunnel configuration from /etc/guix/machines.scm
> and re-enabled the machines.
>
> Let’s see if this makes any difference.

Is it working well now?

> If not, we should try to upgrade Guix on these build machines.

I think there’s a misunderstanding: these machines used to run a very
old Guix, but I installed 1.0 from scratch before migrating them to
berlin.

Thanks,
Ludo’.
Ludovic Courtès wrote on 12 Sep 2019 10:41
(name . Ricardo Wurmus)(address . rekado@elephly.net)
875zlxrjnn.fsf@gnu.org
Hello,

AFAICS we no longer have connection issues to
hydra-slave{1,2,3}.netris.org, so I’m closing this bug.

Ludo’.
Closed