503 Service Unavailable

2009-09-07

A complaint against Jazztel

Filed under: Communication,Hardware,Spanish — rg3 @ 18:01

Note: the following post describes a complaint I filed against a Spanish telco. I hope it is useful to people in Spain facing a similar problem.

This summer we had a problem with Jazztel when we tried to order ADSL service from them. In case anyone runs into the same issue, I will describe the facts from my point of view and the steps we followed to get the complaint resolved in our favor.

The background is simple. We had a phone line with Telefónica. ADSL has never been available on this line, for technical reasons I don't fully understand. A few years ago we tried to order ADSL from them, but it didn't work and we had to return the router, paying the return shipping ourselves. Since then we have always been careful to turn down every ADSL offer we received by phone from other operators, knowing it didn't work on our line. As proof, I always checked ADSL availability on the website http://www.telefonicaonline.com, which reported that ADSL was not available for our line.

In April, supposedly due to technical improvements in the area, we started receiving a new wave of phone calls from several operators offering us ADSL, Jazztel among them, although that is beside the point. We turned down all the offers but, surprised by the new wave of calls, we checked ADSL availability for our line once more. This time, for the first time in years, it reported that our line supported a basic 1 Mb service and nothing faster. Encouraged by the impression that the technical improvement might be real and that we might, in 2009, finally get ADSL and do away with more expensive and worse alternatives such as 56k dial-up and, more recently, a 3G modem connection, we compared prices and offers.

The most attractive offer was Jazztel's. Through their website, jazztel.es, we ordered the ADSL+Calls service with a 1 Mb connection. I don't remember the exact date of the order, but it was at the end of April. Jazztel shipped a wireless router, which arrived on May 11th. Along with the router we received a fair amount of welcome documentation and a clear letter stating that service activation would be announced by SMS to the mobile phone number given during the ordering process. Jazztel lets you check the status of the activation process on their website, which initially said the service would be available in early June, although the date kept shifting slightly as the process moved along. In any case, we stayed alert for messages from Jazztel and, in addition, we periodically plugged in the router to see whether the light labeled "ADSL", which indicates the connection is available, would come on.

However, many days went by and nothing worked. Finally, on June 8th we received an invoice from Jazztel charging for service from May 2nd to May 14th, including an ADSL fee prorated for 2 days, May 13th and 14th, as if the service had been activated on the 13th. We never received any SMS, and every test we had run had come back negative. Prompted by the invoice, we ran a new test and confirmed that the ADSL light did not come on. We called Jazztel that same day.

The first person who took our call helped us run a number of checks and tests for quite a while. When nothing we tried worked, they transferred us to another person who identified himself as a "level two technician", or something along those lines. We kept running tests with him until he finally told us that the distance to the exchange seemed excessive and that our line was not suitable for ADSL, so he declared the line invalid for ADSL and assured us that, from that point on, the full ADSL charge would be refunded on every invoice. We asked whether the amount already billed would be refunded as well. This is the origin of almost everything that follows. Logically, from our point of view, everything billed for ADSL should be refunded. We were never given that service, and the technical report stating that our line is not valid for ADSL is an implicit admission that it not only fails to work from June 8th onward, but that it never worked at all. If it never worked, we should never have been billed, and what was billed should be returned. However, the bureaucracy started over at that point, and the technician said that to find out whether everything billed up to that day would be refunded, we would have to talk to someone from billing, who would call us within a week at most.

As had become the custom, and as we were starting to suspect, Jazztel never called. We waited and let it slide until we finally decided to call on June 27th (20 days later). The person who took our call let us claim a refund for any invoice after June 8th, the day the line was declared invalid for ADSL, but not before. This was the first time we were explicitly denied a refund of what had been billed before the 8th, so we voiced our disagreement. As is usual in these cases, the person on the other end of the line seemed unable to do anything for us.

From that day on, our actions split into two tracks. On one hand, we had to keep disputing the invoices that kept arriving, in which the ADSL charge was not credited back. Each time one arrives, we have to dispute it and are told the amount will be corrected on the following invoice. Naturally, it is quite irritating that it works this way. When you are told that everything billed for ADSL after the 8th will be refunded, you expect that to happen automatically and on that same month's invoice.

On the other hand, we filed two complaints with Jazztel to try to get the amounts billed before the 8th refunded. The first was by phone, where we recounted the facts again, stressing that the line had been declared invalid for ADSL. Jazztel rejected it. We filed a second complaint in writing to atencion.al.cliente@jazztel.com, laying out the chronology in detail and our view that the fair thing to do was to refund everything that had been charged. It was rejected as well. At that point the direct channels for talking to the company were clearly exhausted, so we set out to learn what else could be done to get the full amount back.

We went to the Junta Arbitral de Consumo del Principado de Asturias (the regional consumer arbitration board). It is an option we had already heard about, and it works more or less like a trial, but without lawyers and without costs. If the company finds that route acceptable, it can take part and commit to abide by the arbitrator's decision. However, you first have to check whether the company is signed up with the arbitration board, and the person who attended us there suggested that perhaps the most appropriate channel for our complaint was the Oficina de Atención al Usuario de Telecomunicaciones (the national telecommunications users' office), located in Madrid.

The mechanism is fairly simple: you send a letter or a burofax to this office. They evaluate your case and the documentation you provide. If they see fit, they file a complaint with Jazztel on your behalf, and Jazztel is required to respond within 6 months. If the complaint is resolved against you, the next step is court. When people start talking about deadlines measured in months you begin to feel let down, as if you were wading through ever deeper mud, but we decided to try anyway. We sent a letter on July 22nd, setting out the facts once more and attaching photocopies of the invoices and all the material we had. It hadn't cost us much so far.

Jazztel's response was surprisingly fast. They contacted us on August 12th. We told them we were on holiday abroad, so they would get back to us when we returned. Bad luck; I still haven't checked how much that call cost. In any case, they called again on the 24th, asked whether we were the ones who had filed the complaint, and decided that, of course, we were obviously right. The amount billed would be refunded and, in addition, so that we would not have to claim the credit on every invoice, the ADSL+Calls service would be cancelled immediately, leaving only the basic voice service. Everything looks promising, although we are still waiting for the next invoice.

In any case, a few things stand out. Without meaning anything xenophobic by it, the person who called us on August 12th and 24th was Spanish, with an accent from Spain. Everyone we had spoken to before had a South American accent, possibly Argentinian. When that happens, you get the feeling you are talking to an outsourced call center with little power to decide anything. In August, on the other hand, you get the feeling you have made enough noise to get the attention of someone with the power to actually solve the problem.

Besides, the amount in dispute covers May 13th through June 8th and comes to less than 30 euros. A complaint through the Oficina de Atención al Usuario de Telecomunicaciones is a clear signal that you are willing to go fairly far, and the next step is potentially more expensive for them than for you. I recommend that anyone with this kind of problem do what we did. If the complaint is sound and fair, keep careful note of the dates and of the complaint and incident numbers you are given. If the direct complaints to the company fail, the Oficina de Atención al Usuario de Telecomunicaciones is an option that worked perfectly in our case.

I also have to express my disappointment with Jazztel. When you order one of their products through their website, the company declares its commitment to treating customers properly and providing good customer service. This was undoubtedly an opportunity for Jazztel to show that this is actually true. Instead, for whatever reasons (poor internal organization of their customer service, operators unable to make the right call, etc.), Jazztel rejected a complaint that made all the sense in the world, denying it three times like Saint Peter[1]. Today we could be happy with Jazztel, thinking about how well they treated us and how they let reason prevail even if it cost them a little money. Instead, the image they have left us with is that of a company that behaves just like every other big company when problems arise: looking after its own pocket and refusing to act in a logical and fair way until a higher authority forces it to.

[1] Despite being an agnostic, I think I have managed to adorn the text with a successful reference to a Bible passage.


2008-11-16

Notes on Huawei E220 and mobile connections

Filed under: Hardware — rg3 @ 15:30

I’d like to start this post with an apology for not writing anything for several months. My job and other issues have kept me busy and I didn’t find a good moment to write a long post about something interesting.

As I have written in the past, I live in a place with very unusual telecommunications circumstances, for several reasons. I can't get DSL or cable here, even though we're almost in 2009, and until a few months ago the only real way to get Internet access was a dial-up connection.

You may remember that some time ago I became interested in the possibility of getting mobile Internet access using a 3G modem. At that time I went to a Vodafone shop, which was the only available option, and asked if I had 3G coverage in my area; a nice lady told me it wasn't available instead of trying to talk me into signing a contract and paying.

Fortunately for me, thanks to my job I was able to test one of those Vodafone modems myself and see whether it worked, and it turned out it did. I tested both the Huawei E220 and the Huawei E172. Both worked without any problems and gave me good connection speeds, at least compared to the dial-up connection I had been using! Knowing Vodafone worked, I looked at other providers and eventually settled on Orange, if only because, for the same price as the other major operators here in Spain, the traffic limit is 5 GB per month instead of the 1 GB most other providers offer. I'm close to finishing my second month with them and last month I downloaded about 2 GB of data, so the choice was mostly right and my prediction that 1 GB wouldn't be enough came true.

Prices and connection quality

However, I'd like to share my opinions and observations now that I've been using it for two months. The first thing to note is that prices are steep with most providers. You end up paying 45 euros a month for a connection that usually gives you around 1.x Mbps instead of the advertised 3.6 or 7.2 Mbps, and on top of that you are limited to 5 GB a month. Compared to a DSL connection it's very expensive, even if you can carry it around with you.

In my own experience with Orange (this may not apply to other providers or even other areas of the country), connection quality varies a lot depending on the time of day. Many times I start my connection at 20:00 or 21:00 and it works without any problems. However, from Monday to Friday, without moving or touching the modem or doing anything special, the connection degrades a lot between 23:00 and 00:00, probably because many people connect to the Internet through the service at that time. I know several people who, after a busy day, connect for a few minutes at 23:00 to check their email and browse the web a bit before going to bed. That would explain why the service quality drops at that time, so I avoid downloading large files in those hours. The same happens during weekends as lunch time approaches. Early in the morning the connection is fine. Yes, this is a shame and upsetting, but remember that I need more than 1 GB and there's no other way to get broadband here, so I'm stuck.

I also noticed that in some cases, mine in particular, the exact position and orientation of the modem make a big difference in link quality. Unfortunately, these modems are usually distributed with one very short USB-to-miniUSB cable, or even no cable at all in the case of the E172, which makes it very difficult to play with placement to achieve gains. That's why I bought a 3-meter USB extension cable and, with a little do-it-yourself work, I managed to keep the modem permanently in a high position close to a window. With this setup, I usually get four or five signal bars (out of five) instead of the one or two I get if I put the modem on the floor next to the computer.

I took a picture of the final result. I used a long extensible pole (2 meters high), stuck a cube of cork on top, and used adhesive tape to attach the modem to it. Cheap and ugly, but it works wonders. Thank you very much to the guy who gave me that advice. The extension cable cost me about 3 euros, and if you have link quality problems it's probably worth trying. With adhesive tape you can stick the modem to the window glass temporarily and test whether it improves your signal before using other, more aggressive methods.

3G modem

Linux

The modem works flawlessly in Linux, as has been widely reported around the web. It's true that many of these modems work better with the PIN disabled. I had no problems with the PIN set on the E220, but the E172 works better without it. With the PIN enabled, it only established a connection once every several attempts. My personal advice, then, is to disable the PIN to make sure it works, if you don't mind doing so. Bear in mind that you should then make sure your modem doesn't get stolen, or anyone could use it at your expense.
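
If you'd rather disable the PIN from Linux than from a phone's menus, here is a minimal sketch. The control port /dev/ttyUSB0 and the PIN 1234 are assumptions to adapt; AT+CLCK is the standard 3GPP facility-lock command:

# A serial terminal such as picocom (minicom or screen also work) on the
# modem's control port lets you type AT commands at it directly:
picocom -b 115200 /dev/ttyUSB0
# At the prompt, query and then disable the SIM PIN lock ("SC" facility):
#   AT+CLCK="SC",2           -> reports whether the PIN lock is enabled
#   AT+CLCK="SC",0,"1234"    -> disables it (use 1 instead of 0 to re-enable)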

It can be configured to work with wvdial, kppp and many other dialing tools. I set it up using bare pppd and sample scripts I found on the web. For most of these modems, the scripts are all the same and you only need to change the username and password, which vary from provider to provider and are usually the provider name (in my case, orange/orange), and the so-called “Internet APN”, which varies from provider to provider too. In my case and some others, it’s “internet”, but in Vodafone Spain it’s “ac.vodafone.es”, for example. You’ll have to find out with a web search.

As an example, I’m going to post my two pppd files.

/etc/ppp/peers/3gmodem:

/dev/ttyUSB0
460800
crtscts
modem
noauth
#usepeerdns
defaultroute
noipdefault
debug
#noccp
#nobsdcomp
#novj
#mtu 500
user "orange"
password "orange"
connect '/usr/sbin/chat -f /etc/ppp/chat-3gmodem'

/etc/ppp/chat-3gmodem:

# Abort if the modem reports a busy line, an error or no carrier.
ABORT BUSY
ABORT ERROR
ABORT 'NO CARRIER'
REPORT CONNECT
TIMEOUT 10
# Reset the modem.
"" "ATZ"
# Define PDP context 1: IP traffic through the "internet" APN (provider-specific).
OK AT+CGDCONT=1,"ip","internet"
# Basic modem setup: echo and verbose results on, DTR/DCD handling,
# no auto-answer, hardware flow control, serial rate.
OK "ATE1V1&D2&C1S0=0+IFC=2,2"
OK "AT+IPR=115200"
OK "ATE1"
# Dialing can take a while, so raise the timeout before calling PDP context 1.
TIMEOUT 60
"" "ATD*99***1#"
CONNECT \c

I would then connect by running pppd call 3gmodem nodetach as root, or maybe set up a loop to make it redial when the connection is lost.
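
Such a loop can be as small as this sketch (adjust the sleep time and logging to taste):

#!/bin/sh
# Keep redialing: whenever pppd exits (connection lost or failed), wait a
# few seconds and call it again. Run as root; "3gmodem" is the peers file above.
while true; do
    pppd call 3gmodem nodetach
    echo "pppd exited, redialing in 5 seconds..."
    sleep 5
done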

2008-03-06

Loading times

Filed under: Hardware,Software — rg3 @ 18:15

Since the invention of the personal computer, the speed and capacity of these machines have improved enormously; they are now much faster and better than they were years ago. These performance improvements have touched several components. The most important ones for general usage nowadays are the CPU and the memory system, which includes the different CPU cache levels, the RAM and the hard drive. The increase in performance has been bigger in the components closer to the CPU. Hard drives have seen many improvements in access, read and write performance, but they are next to nothing compared to the improvements in CPU speed.

If we don't take into account the latest MS Windows version and some special types of programs like 3D games, in my opinion computers have been good enough for a few years now. Any computer built, let's say, in the last four years is able to run Windows XP or a decent Linux distribution with an office suite, a communications suite and several other programs without any problems. You can have them running at the same time and they don't bring the machine to its knees.

Right now I'm typing this on a laptop with a single-core Sempron CPU, 1 GiB of RAM and a 60 GB hard drive. The majority of the hard drive space is taken up by personal data like pictures and videos, not the system itself. The CPU is barely being used at this moment, and I have a Slackware Linux system running with the latest KDE 3.x, Kontact (a communications suite with calendar, to-do list, notes, e-mail, etc.), Mozilla Firefox and several other programs. The amount of memory used by the applications, according to the KDE Information Center and the "free" tool, is about 160 MiB. I could easily run OpenOffice.org too if I wanted and still not touch swap. There's plenty of memory available. And that's what most computers are being used for: be it at home or in offices, they are used to browse the web, communicate by email or IM, send and receive files, watch pictures and videos and compose documents.

All these improvements have allowed us to create more complex systems, normally with the single purpose of making other people's lives easier, starting with the people who create the software itself. In the early days you couldn't do this, but now you can have libraries, and more libraries on top of them, and a third library layer, and a lot of abstractions and a complete API, so writing an application becomes much easier. This means the application is easier to create and maintain, and those two tasks take less time and effort, allowing the software to be cheaper and easier to use, because once you have a solid framework that lets you work fast, you can concentrate on other kinds of improvements, like making the application more intuitive.

So we know that, for now, we don't need much more computing power in most cases, and we can guess there is not much room for making a word processor or web browser better. They are what they are. You can point and click, save bookmarks, type while seeing how the document will look on paper, correct mistakes before printing it, save a lot of time by creating and applying styles, have it create and update a table of contents automatically... so where can we improve? Is there anything in which we haven't seen a real improvement? Something we should have paid a little more attention to and didn't? Some unavoidable problem derived from all of this that we could try to solve? In my opinion, yes: loading times. There's a system with several layers of libraries and other software in the middle, and at the top of it there's an application. That complex base makes it easier to write and maintain the application, but it also means it's bigger byte-wise and more things need to be initialized and read from disk, and while the systems we have are perfectly capable of running everything we need to run at the same time, it still takes too much time to get a program started. Once it is running, fine. While loading, not so fine.

The time it takes to boot an operating system today is worse than it was in the early days, due to the number of subsystems that need to be initialized and the incredible variety of hardware it needs to detect and support, among other factors. The first Unix systems booted in seconds. Now they may take minutes. This is an obvious problem, and people have been trying to solve it and improve the situation in recent years. Windows XP normally takes less time to boot than Windows 98. Ubuntu Linux, for example, is now trying to move away from the classic SysV init (and others) and replace it with "upstart", to boot more efficiently. Someone invented "suspend to RAM" and "hibernate". In the first case that means booting in a few seconds to a completely usable system with everything started and ready to run, just as we had it running before. Windows has SuperFetch, based on the standard Windows Prefetcher, which can be tweaked through the registry in Windows XP, for example. Linux has Preload and Prelink, and Mac OS X had Prebinding which, according to Wikipedia, has now been abandoned because it didn't really give noticeable performance improvements.

So, yes, people have been well aware of the loading time problem and have been trying to create hacks and find ways of circumventing the problems that come from having a complex base system that makes our lives easier. For some types of applications, loading times are not a problem. For example, I can start an XTerm with Mutt almost instantly once the files are in the disk cache (in memory). Yet KMail takes several seconds to load, even when it's already cached. Which one will I use? While Mutt is nothing short of great, and I say that from my own experience, it lacks many very useful features KMail has (God bless the quick search bar), and I'd prefer to use KMail if possible. Firefox takes four seconds to start on this machine when cached, and OpenOffice.org... let's not even go there. Once they are running, no problem, I have plenty of memory available. Like I said, the problem lies in the loading time.

The approach I use to minimize this problem is relatively simple. You may have your own; here's mine. I use suspend-to-RAM as much as possible. It takes a little bit of power but, let's face it, "booting" in 5 seconds to a completely prepared system is a spectacular and very real improvement. On top of that, I noticed some time ago that KMail has an option to place itself in the system tray, invisible until you click on it (Windows users can take uTorrent as an example of this behaviour). When you use it that way and want to write an email, you just click on the system tray icon and it comes back instantly. When you close the window, it goes back to the system tray instantly. Instant-on and instant-off.

This is the obvious solution. If we have plenty of memory and running all of these programs at the same time is not a problem, yet the loading time for many of them is, you do the obvious thing: keep all of them running all the time. If you also use suspend-to-RAM, they can run for ages, only stopping at the occasional reboot that comes with some system upgrades (like kernel upgrades in the Linux case). As I launch them when I enter my session, their total launch time is added to, or accounted as part of, the boot time, but real reboots are only needed once in a blue moon.

Some applications have a built-in system for this: besides uTorrent and KMail, OpenOffice.org for Windows (and maybe Linux, I don't remember) has a pre-load system, and Firefox had (or has?) a similar system under Windows. Others don't, but they stop being a problem if you have tools like "ksystraycmd", a KDE tool that lets you place any windowed program in the system tray. Thanks to it, Firefox starts and stops instantly too, as do other applications (I could use it with OpenOffice.org Writer and Calc if I used them frequently).

Some of you may be wondering what the difference is between this and keeping the application minimized. The answer is that while the application sits in the system tray, it doesn't appear in the task bar and it's not present, for example, in the Alt+Tab menu. So when you send it to the system tray, you can effectively forget about it, and it doesn't interfere with the other programs you may be running at a given time. When you bring an application back from the system tray you can minimize it like any other, or keep the window open, yet you can at any time say "I'm done with it for now, so put it somewhere else until I need it again in 15 minutes". This separation of concepts between "minimization" and "iconification" (to the system tray in this case) also marks the difference between this tool and other iconification systems like the ones in CDE or WindowMaker, where "iconification" is only an aesthetic variation on "minimization", in the sense that iconified windows are still considered active, and iconifying a window is the only way of hiding it.
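
As a concrete sketch of the ksystraycmd part: the option names below come from the KDE 3 version of ksystraycmd and may differ on your system, so check ksystraycmd --help before relying on them.

# Launch Firefox with its window hidden and only a system tray icon shown;
# clicking the icon shows or hides the window.
ksystraycmd --hidden firefox

# Dock a window that is already running instead of launching a new process;
# --window takes a regular expression matched against window titles.
ksystraycmd --window 'KMail' kmail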

kde-systray-and-applets.png

There is also a very interesting application called Yakuake, which is a Quake-console-style terminal application for KDE. It sits invisible at the top of the screen until you press a hotkey or key combination, and then it slides down. It's a very efficient way to keep a terminal permanently running. Some time ago the main Yakuake developer was considering extending this functionality beyond terminals, making Yakuake handle any type of application, so you could bring any application down with the appropriate hotkey and keep them all running all the time. When I read about it I didn't give it much importance, but now I do, and I think it was a great idea that probably won't be implemented any time soon. With the arrival of KDE 4, maybe it will now be easier to write such an application. I imagine a very thin bar at the top of the screen holding several applications that can notify me of activity through that bar (like KMail getting new mail) and that can only be brought down using a key combination, without interfering with the active applications in the taskbar. I think that would be very nice, practical and a good way to have instant-on and instant-off for a defined set of applications. Who knows. Maybe some decent programmer (or me, failing that) will create such an application in the future.

Keeping an application running all the time has only one problem: sometimes the application is not very kind to system memory or has memory leaks (there are many claims about Firefox having serious memory leak problems, but I haven't noticed any real ones; I use the Flash and MPlayer plug-ins and the Flashblock extension). To keep an eye on that, KDE has a very nice applet that lets you monitor CPU and memory usage at a glance from time to time. Remember: your computer has a lot of memory, so make good use of it. Nowadays, when someone buys a new computer they expect very short loading times and everything starting in a fraction of a second (like in movies). There are a few tools and tricks which can help you achieve this goal most of the time, so give them a try.

2007-12-06

Digitizing vinyl discs

Filed under: Hardware — rg3 @ 20:14

I recently became interested in digitizing vinyl discs, to copy the music my family had in that format to digital storage such as the hard drive in my PC, an iPod or a CD. Doing it is actually very simple, but the array of specialized devices (some of them with very steep prices), cables and connectors makes it look much harder. I will try to condense here the knowledge I've gathered after reading several websites and listening to several experts on TV and radio shows.

Expensive hardware and software not needed

It is probably hard to make the maker of a 120-euro device admit that the device they are selling doesn't work miracles and isn't really worth it unless you are a professional, but they should admit it. You may find these devices in music equipment catalogs from time to time, available for purchase by the general public, bragging about how they can record music taking 64-bit floating point samples at 96 kHz, convert it to MP3 on the fly and transfer it to your PC easily.

And then there is also expensive audio processing software capable of removing noise and clicks and, in general, working supposed miracles to improve the quality of your audio files when post-processing them after they have been recorded from your vinyl discs.

Now, I’ll tell you something obvious and intuitive everybody already knows or supposes, but let’s make it clear: No device or sophisticated algorithm beats cleaning the vinyl discs. That’s right, let’s repeat it again. You can buy expensive hardware and expensive software and spend hours cleaning your recordings and removing clicks and noises and hums and whatever. But nothing is as time-efficient and quality-efficient as cleaning your vinyl discs and your turntable. 99% of the final quality depends on this, and miracles don’t exist.

What you need

  • A turntable (obviously) connected to an amplifier. Most people don't have a standalone turntable. If you used to listen to music on vinyl, where did you plug in your speakers or headphones? Where did you change the volume? That's the amplifier. It takes the signal from the turntable and amplifies it to send it to the speakers, among other tasks.
  • A computer with a line-in jack. Most computers and soundcards nowadays have one. My cheap integrated Realtek soundcard has a line-in jack. The line-in jack is usually marked with a group of parentheses with an arrow crossing them and pointing inward. Ugly text representation: (( ))<--.
  • A cable to connect the amplifier to your PC. The connectors at the cable ends depend on how you are going to connect the amplifier to the computer, but in general there are only two possibilities. It will be either an RCA-to-jack cable in the best case (we will clarify that below) or a jack-to-jack cable in the worst case. Neither of those cables should cost more than 5 euros, and both should be available in any music equipment shop. I bought a jack-to-jack cable for other purposes some days ago and its price was 2.5 euros.
  • Two pieces of software: a mixer that lets you adjust the volume of the different soundcard channels (the Windows mixer, kmix, amixer, alsamixer, etc.) and a recording program. There are many decent and free recording/editing programs out there, like Audacity, rec (from SoX), arecord and others.

As you see, the total software cost could be zero and the total hardware cost below 5 euros.

Connecting your equipment

Probably you already have your turntable connected to your amplifier, and some speakers connected to the amplifier. Now you need to connect the amplifier to the computer's line-in jack, which is probably a small audio jack, so one end of the cable should have a small audio jack. The next question is where to plug the other end into the amplifier. There are three possibilities.

Safe bet: Rec-Out

You should first check whether your amplifier has "Rec-Out" plugs. Sometimes these are labeled with other words like "Line-Out" or "Tape-Out". Do not confuse them with "Audio-Out" or "Headphones". If they exist, they are probably on the back of the amplifier, next to or near the plugs you use to connect the turntable, and they are probably RCA connectors, so you need an RCA-to-jack cable.

This is the safe bet because the signal you get from those connectors has a standardized level, independent of the amplifier's volume settings, and it's safe to connect it to the computer's line-in. Nothing should break or burn.

Unsafe bet: Headphones

If you are out of luck and your amplifier doesn't have "Rec-Out", you can still use the headphones plug to get the audio signal. However, this is unsafe because the signal level, the intensity of the electrical current going through the cable and into the sound card, depends on the volume settings in the amplifier, and it can be very high. High enough to damage the computer's sound card, specifically. Still, if you set the volume to zero and then increase it very slowly, your card will probably be all right, but be very careful. I used this connection in one of my tests and everything was fine with the volume close to zero. I didn't fry my sound card, but it's at your own risk. You would need a jack-to-jack cable, and optionally a big-jack to small-jack adapter if the headphones output is the big one and your cable has the small one. These are also very cheap.

No-no bet: Audio-Out

Never use these plugs. They are meant for speakers, and you will probably damage your sound card. Did I already mention you shouldn't use Audio-Out? Sorry, sometimes my memory fails and I don't remember whether I said that you shouldn't use Audio-Out. By the way, you shouldn't use Audio-Out.

Minimizing noise, clicks and jumps

I would recommend washing your vinyl discs before recording their audio. You can wash them with water, neutral soap and your bare fingers, and I think it's better to let them dry, maybe using a hair dryer if it has a mode to blow cool air instead of hot air. If you dry the last drops or the whole disc with something like a towel or another piece of fabric, do it gently and make sure it doesn't leave any lint behind.

You can also clean your turntable to remove dust, being careful not to damage it. Some turntables also let you tune the rotation speed. Mine, for example, lets me select between 45 and 33 1/3 revolutions per minute, but in either mode I can also fine-tune the speed. The rotating platter has some marks on its side and an orange light. According to my manual, the speed is optimal when the bottom row of marks appears not to move while the platter rotates. Maybe your turntable has something like this. Read its manual, if you still have it, for more details.

Finally, make sure your turntable's ground cable is connected to a ground terminal. Most amplifiers have one on the back. I mention this because I was getting a constant background noise and didn't know what was going on until I spotted the loose cable and noticed the noise changing when I touched its end. Chances are your turntable is already properly connected to the amplifier, but I had to mention it. There is also an uncommon but weird problem with ground loops. I read about it on the Audiotoolers website. See the reference links for more details if you have a background noise problem.

Preparing your software

You should use the mixer to tell the card to record from the line-in connection instead of the microphone connection or other sources. The recording volume should be low in general. This part is a bit tricky. For example, my card has a line-in volume level. If I don't mute the line-in, I hear whatever comes through it on the computer's headphones or speakers, but this level doesn't really match the recording level, which on my soundcard is controlled separately. So the only way to properly verify that the audio volume was right was to make a test recording and then play back what I had recorded.

If you set the recording volume too high, the signal will be saturated and it will sound no better than horrible. Just for reference, my settings had the capture (record) level at around 25% in both the line-in and capture controls. Do not take my numbers as a reference; start with a low volume and increase it slowly. Make a test recording and play it back to verify the volume is fine. If you used the headphones connection, you already have two different volume controls to balance; set both to low levels if possible, especially the one on the amplifier, as mentioned previously.
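
For reference, here is a sketch of what that setup can look like from the command line with ALSA's amixer. Control names such as "Input Source", "Line" and "Capture" vary between cards, so list yours first and adjust:

# List the mixer controls your card exposes; the names below are typical of
# Realtek codecs and are only an example.
amixer scontrols
# Record from line-in and keep the capture level low (around 25%).
amixer sset 'Input Source' 'Line'
amixer sset 'Line' 25%
amixer sset 'Capture' 25% cap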

Record

To test the volume and record in general, use a recording application like the ones mentioned above. I'd record in stereo, with 16-bit samples at 44100 Hz. This is the format CDs use. Save what you record to WAV and you will easily be able to burn an audio CD with it, or convert it to MP3 later to use with your iPod or similar. Please refer to the recording application's manual and documentation to find out how to set these parameters. Depending on the application, it may be easier to play the disc, store the audio of a whole side in one file and then split it into tracks, or to split it while you record and listen to your discs.
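
As a command-line sketch of such a recording with ALSA's arecord (Audacity exposes the same parameters through its interface; the file name is just an example):

# Record from the capture source selected in the mixer, in CD quality
# (stereo, 16-bit, 44100 Hz), until you stop it with Ctrl+C.
arecord -f cd -t wav side-a.wav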

Advanced trick

One of the guides I found online also mentioned the possibility of increasing the recording quality by wetting the disc's surface with a mix of 75% water and 25% alcohol, using a spray bottle. It is a known method to remove more background noise, but some people are against it, and it has some drawbacks. For example, if the turntable isn't well sealed, the mix may slip inside it and damage its circuits, and the alcohol can also dissolve the glue that attaches the diamond stylus to the tonearm. The discs also need to be cleaned after being recorded, because when household alcohol evaporates after some minutes or hours, it leaves traces of other substances behind on the disc surface, and those are a problem. I didn't use this trick, but if you decide to try it, please mail me with your results.

References

  1. http://en.wikipedia.org/wiki/RCA_connector
  2. http://en.wikipedia.org/wiki/TRS_connector
  3. http://www.bbc.co.uk/dna/h2g2/A810091
  4. http://www.hispamp3.com/tallermp3/como/f_vinilo1.shtml
  5. http://www.audiotoolers.com/ubbt/ubbthreads.php/ubb/showflat/Number/7364
  6. http://www.audiotoolers.com/ubbt/ubbthreads.php/ubb/showflat/Number/6553

2007-04-03

Software suspend under Linux

Filed under: Hardware,Software — rg3 @ 22:45

Suspending a computer means turning it off in a special way, so that when you power it on again it resumes what it was doing, as if nothing had happened. There are two common ways of suspending a computer: suspending to RAM and suspending to disk.

Suspend to disk

Suspending to disk, also called "hibernation", is the "simplest" way of suspending a computer. The operating system kernel stops what it is doing and writes the contents of the system RAM to a specific place on the hard drive. When a computer is running, the system memory contains all the information needed for it to run: the kernel image, the kernel data structures that allow programs to run, the state of the programs currently running, etc. In short, a lot of data that represents the state of the computer at a given moment. By writing those contents to the hard drive, the operating system saves a snapshot of the system at that moment to a safe place. After writing that snapshot, it proceeds to power off the machine the same way it normally powers off.

There's no need for special or unusual hardware support as far as I know (please mail me if I'm wrong). If you can power on and off normally, you should be able to suspend to disk, provided you have enough space on your hard drive to store the RAM image and your OS kernel has suspend-to-disk support. When the computer powers on again, it checks whether there's a saved image. If there is, it restores the contents of that image to the system RAM and continues execution at the point it left off. As if you had never turned it off. It's not quite that simple, because some things won't work after resuming. For example, if you suspend for a long time, which is usually the case, the network connections probably won't work after resuming and will need to be re-established. In other words, the computer can deceive itself into assuming it was never powered off, but other computers will probably notice. There are some more issues. For example, when the computer boots again, it's a normal boot up to the point where the operating system kernel starts restoring the image. First comes the BIOS, then the bootloader loads the kernel and the kernel starts booting, which is when it checks whether there's a saved image. But what if you had upgraded the kernel, and when the machine boots again the new kernel is loaded? Or what happens if you change some hardware while the computer is suspended? Or what if you boot another operating system, maybe from a live CD, and touch the filesystems or repartition your hard drive? A disaster. In many of these cases, as the Linux software suspend documentation says, you can "kiss your data goodbye". So be careful and always power off or reboot normally when you want to do these things. This is especially critical for kernel upgrades, which are the most common of the operations mentioned. After installing a new kernel, reboot immediately to load it.

That said, many Linux distributions have suspend to disk preconfigured and handle that common kernel upgrade case. When you install a new kernel, maybe a “flag file” is created somewhere and when you try to run one of the suspend-to-disk scripts, they detect the flag file and refuse to suspend. In any case, it’s always a good idea to reboot after installing a new kernel. Furthermore, if you have installed it, you want to use it, don’t you?

To suspend to disk under Linux you don’t really need any special software package as long as your kernel has been passed the appropriate parameters on boot (resume=…) and you do the right thing. Essentially, you need to specify a swap partition to save to and restore the disk image from, and then run a couple of things:

echo shutdown >/sys/power/disk
echo disk >/sys/power/state

But there are other ways to do it (s2disk), and you will probably need to run a couple of things before and after suspending. This is because, as I mentioned, not everything is restored properly. Most people will want to bring down the network interfaces before suspending and bring them up after resuming, etc. Also, some kernel modules and programs don't like suspending (the proprietary NVIDIA kernel module, maybe?), and you may need to reload and relaunch those kernel modules and programs. For example, my Acer laptop has a winmodem that uses the snd_intel8x0m kernel module and the slmodemd daemon. They need to be reloaded, so when I suspend I have to kill the daemon, unload the module, reload the module and restart the daemon. For this reason, and because I use Slackware and need to handle all of this myself, I created a short (24-line) suspend-to-disk shell script that automates everything, and it's the one I call when I want to suspend to disk.
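
A minimal sketch of that kind of wrapper script, rather than my actual one: the network script is Slackware's, the module and daemon are the ones mentioned above, and the commented slmodemd invocation is only an example to adapt.

#!/bin/sh
# Suspend to disk: stop what doesn't survive a suspend, save the image and
# power off, then restore everything when execution resumes here after boot.
/etc/rc.d/rc.inet1 stop           # bring the network down (Slackware script)
killall slmodemd                  # stop the winmodem daemon
rmmod snd_intel8x0m               # unload its kernel module

echo shutdown >/sys/power/disk    # plain power-off after writing the image
echo disk >/sys/power/state       # write the image and power off; resumes here

modprobe snd_intel8x0m            # reload the module after resuming
# Restart slmodemd with whatever invocation you normally use, for example:
#   slmodemd --country=SPAIN --alsa modem:1 &
/etc/rc.d/rc.inet1 start          # bring the network back up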

If, instead of the vanilla kernel software suspend to disk you’re using Suspend2, you should probably configure your hibernate script and use it, because it may be easier. Remember to read your distribution manual and see what they have done for you and how everything is set up. This way you’ll know what you need to run if you want to suspend to disk.

In terms of time and power, suspend-to-disk isn't especially fast, but it's worth using unless you can suspend to RAM. When the computer powers off, it really powers off. You may even unplug it if you want, and it won't drain the battery if it's a laptop. The time it takes to save and restore the image on disk depends on several factors, usually your disk speed, your CPU speed and whether you compress or encrypt the image. Even though it sounds weird or unintuitive, it's usually faster to compress the image. This is because in modern computers the CPU can compress the image faster than the hard drive can write it, so compressing will use more CPU power but take less time, because the amount of data to write is greatly reduced. The same applies when resuming, because the CPU can decompress the image faster than the hard drive can read it.

It will probably take around 10-20 seconds to power off and between 20 and 50 seconds to power on (around 35 in my case) from the moment you press the power button. A fast BIOS helps boot faster, as does having a fast hard drive and a fast CPU. In general, unless your system is very well tuned and boots up very quickly, suspending to disk will benefit you and cut your boot time to half or less of what it was. You will have to experiment and see for yourself. Also, let's not forget that when it has finished booting, you will have everything running as it was before. The disk cache will also be in place. In other words, the restored system will be as "responsive" as it was before suspending. This is another good reason to consider suspending to disk instead of simply powering off.

In terms of power, which matters if you use a laptop, I mentioned before that the system really powers off and will not drain the battery. But let's not forget that those 10-20 seconds while the computer writes the image to disk and the 20-50 seconds it takes to restore it are peaks in power usage, because the disk and probably the CPU will be working intensely. That total of roughly 60 seconds of high power usage may consume about as much as 5 minutes of sitting idle, or something similar, but not much more.

Suspend to RAM

Suspend to RAM means powering off almost everything in the computer except the system RAM. That way, the state of the currently running system is preserved while you save power by shutting down the CPU, the hard drive, etc. There's no need to waste time saving and restoring large amounts of data; you simply power components down and back up. Hence, this is the fastest method. My Acer laptop can suspend to RAM in less than 10 seconds and resume in about the same time (7.5 seconds on average, using a stopwatch). You can think of suspend to RAM as going into standby mode.

However, suspending to RAM needs special cooperation from the computer hardware. When you resume, it's not a normal boot (if you can still call it a "boot"), and there are many things that may go wrong. The most typical problems involve the graphics card not being restored properly, and you may end up with a blank or corrupted screen after resuming, even though the keyboard and mouse may still work (or may not). From the software point of view, you face the same problems as when suspending to disk; that is, you'll probably want to bring down network interfaces, unload problematic modules, etc.

For these reasons, suspending to RAM does not always work. I think you can suspend to RAM using Suspend2, but I've only tried the vanilla kernel software suspend using the s2ram program from the suspend package. The name of the actual package depends on the distribution, so you'd better check which package contains the s2ram program and install it. The program maintainers are very optimistic and always tell you to keep trying, because s2ram has several options to try different approaches and you may find one that works. Sometimes the program may tell you your computer is not in the whitelist and refuse to run, but you can force it to try several different methods, and one of them may actually work, so don't lose hope. If you make it work on a non-whitelisted system, they ask you to report the success and the method you forced, so they can whitelist your system in future versions.

As before, on my Slackware system I wrote a specific script for myself (23 lines), identical to the one that suspends to disk but with the two critical commands replaced by one call to s2ram. It's the one I invoke when I "close" the laptop, that is, the one associated with the lid button event.
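
The suspend-to-RAM version is essentially the same sketch with the two echo commands replaced by a call to s2ram (again, the names are assumptions to adapt to your system):

#!/bin/sh
# Suspend to RAM: same housekeeping as the suspend-to-disk script, but the
# actual suspend is a single s2ram call.
/etc/rc.d/rc.inet1 stop
killall slmodemd
rmmod snd_intel8x0m

s2ram                             # add options such as -f (force) only if needed

modprobe snd_intel8x0m
# Restart slmodemd with whatever invocation you normally use, for example:
#   slmodemd --country=SPAIN --alsa modem:1 &
/etc/rc.d/rc.inet1 start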

Suspending to RAM does use some power. Remember that your memory stays powered while the computer is suspended. For reference, my laptop can survive suspended for more than 12 hours when there's only half an hour of battery left. If you've got a full battery that lasts 4 or 6 hours, the computer may well survive for several days, but not many. Of course, if it's a desktop computer and/or you leave it plugged in, it can stay suspended for as long as you want, or as long as your electric company lets you. I haven't noticed any power usage peaks in the suspend and resume process. It simply powers down to "standby" and back up to normal.

As with suspending to disk, the system will be as responsive as it was just before suspending. Due to the low power requirements and the minimal time it takes to restore the system, suspending to RAM is usually a good alternative to leaving the system turned on and idle, unless you want it to keep downloading something or to finish some computation.

Suspend to both

Suspend to both is useful if you can get suspend to RAM to work. Before suspending, it saves a RAM image to disk (as in suspend to disk) and then suspends to RAM. If there's a problem and power is lost while suspended (because you run out of battery, for example), it restores the disk image when booting. Otherwise, it resumes as in suspend to RAM and discards the disk image. I haven't found this useful, but I'm sure many people will in their specific situations and environments.

