amantecayotl and other sciences

amantecayotl translates from Nahuatl as the science of technology, or simply technology

Tuesday, June 23, 2015

Windows 2003 as a time server

-- translated

The W32time service is responsible for keeping the computer's clock synchronized, especially to ensure Kerberos authentication works within Active Directory.

W32time is based on the Simple Network Time Protocol (SNTP), RFC 1769.

1. Locally connected hardware clock (optional) or Internet time server (optional)
2. PDC Emulator in forest root domain
3. Other domain controllers in forest root domain, or PDC Emulators in child domains
4. Workstations and member servers in forest root domain, or other domain controllers in child domains
5. Workstations and member servers in child domains

Synchronizing with an Internet time server

This registry value determines the synchronization mode; change it from NT5DS to NTP. (The value described is Type, under HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters.)


This registry value indicates whether the computer advertises itself as a possible time server. It only applies when the previous value is set to NTP. Change it from 10 to 5. (The value described is AnnounceFlags, under HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config.)


This is a space-delimited list of time servers (the NtpServer value). Entries can be DNS names or IP addresses; if DNS names are used, append ,0x1 to the end of each name (for example, a hypothetical entry would look like pool.ntp.org,0x1).
Restart the service with:

net stop w32time
net start w32time
And to speed up synchronization, run this command:
w32tm /resync /rediscover
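The registry changes above can also be scripted with reg.exe. This is a sketch, not the original post's procedure: the value names (Type, AnnounceFlags, NtpServer) are the standard W32Time parameters that match the values described, and pool.ntp.org is an illustrative server name — verify both on your own server before applying.

```shell
:: Switch from domain-hierarchy sync (NT5DS) to a manual NTP source.
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type /t REG_SZ /d NTP /f
:: Advertise as a reliable time server (10 -> 5).
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v AnnounceFlags /t REG_DWORD /d 5 /f
:: Space-delimited server list; ,0x1 marks each DNS name. Server name is illustrative.
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer /t REG_SZ /d "pool.ntp.org,0x1" /f
:: Restart and force a resync, as described above.
net stop w32time && net start w32time
w32tm /resync /rediscover
```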

Good recommendations for programmers

I found several articles that seemed interesting to me at

Here is a summary of the points that caught my attention the most:

Daily Builds Are Your Friend

by Joel Spolsky
Saturday, January 27, 2001
With multiple developers and testers, you encounter the same loop again, writ larger (it's fractal, dude!). A tester finds a bug in the code, and reports the bug. The programmer fixes the bug. How long does it take before the tester gets the fixed version of the code? In some development organizations, this Report-Fix-Retest loop can take a couple of weeks, which means the whole organization is running unproductively. To keep the whole development process running smoothly, you need to focus on getting the Report-Fix-Retest loop tightened.
One good way to do this is with daily builds. A daily build is an automatic, daily, complete build of the entire source tree.
Automatic - because you set up the code to be compiled at a fixed time every day, using cron jobs (on UNIX) or the scheduler service (on Windows).
Daily - or even more often. It's tempting to do continuous builds, but you probably can't, because of source control issues which I'll talk about in a minute.
Complete - chances are, your code has multiple versions. Multiple language versions, multiple operating systems, or a high-end/low-end version. The daily build needs to build all of them. And it needs to build every file from scratch, not relying on the compiler's possibly imperfect incremental rebuild capabilities.
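On UNIX, the "fixed time every day" can be a single crontab entry. A minimal sketch — the script path is illustrative, not from the article:

```shell
# Run the full daily build at noon every day, appending output to a log.
# /home/build/daily_build.sh is a hypothetical script name.
0 12 * * * /home/build/daily_build.sh >> /home/build/build.log 2>&1
```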
Here are some of the many benefits of daily builds:
  1. When a bug is fixed, testers get the new version quickly and can retest to see if the bug was really fixed.
  2. Developers can feel more secure that a change they made isn't going to break any of the 1024 versions of the system that get produced, without actually having an OS/2 box on their desk to test on.
  3. Developers who check in their changes right before the scheduled daily build know that they aren't going to hose everybody else by checking in something which "breaks the build" -- that is, something that causes nobody to be able to compile. This is the equivalent of the Blue Screen of Death for an entire programming team, and happens a lot when a programmer forgets to add a new file they created to the repository. The build runs fine on their machines, but when anyone else checks out, they get linker errors and are stopped cold from doing any work.
  4. Outside groups like marketing, beta customer sites, and so forth who need to use the immature product can pick a build that is known to be fairly stable and keep using it for a while.
  5. By maintaining an archive of all daily builds, when you discover a really strange, new bug and you have no idea what's causing it, you can use binary search on the historical archive to pinpoint when the bug first appeared in the code. Combined with good source control, you can probably track down which check-in caused the problem.
  6. When a tester reports a problem that the programmer thinks is fixed, the tester can say which build they saw the problem in. Then the programmer looks at when he checked in the fix and figure out whether it's really fixed.
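Benefit 5 — binary-searching the archive of daily builds — can be sketched in a few lines. This is an illustrative sketch: the builds list and the is_bad predicate stand in for the dated build directories and the manual step of retesting each one.

```python
# Find the first nightly build in which a bug appears, given builds in
# date order and a check that reproduces the bug on a given build.
def first_bad_build(builds, is_bad):
    lo, hi = 0, len(builds) - 1   # assumes the newest build is known bad
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(builds[mid]):
            hi = mid              # bug already present: look earlier
        else:
            lo = mid + 1          # still good: bug appeared later
    return builds[lo]
```

With an archive of 30 builds this takes about five retests instead of thirty; combined with source control, the check-ins between the last good and first bad build are the suspects.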
Here's how to do them. You need a daily build server, which will probably be the fastest computer you can get your hands on. Write a script which checks out a complete copy of the current source code from the repository (you are using source control, aren't you?) and then builds, from scratch, every version of the code that you ship. If you have an installer or setup program, build that too. Everything you ship to customers should be produced by the daily build process. Put each build in its own directory, coded by date. Run your script at a fixed time every day.
  • It's crucial that everything it takes to make a final build is done by the daily build script, from checking out the code up to and including putting the bits up on a web server in the right place for the public to download (although during the development process, this will be a test server, of course). That's the only way to insure that there is nothing about the build process that is only "documented" in one person's head. You never get into a situation where you can't release a product because only Shaniqua knows how to create the installer, and she was hit by a bus. On the Juno team, the only thing you needed to know to create a full build from scratch was where the build server was, and how to double-click on its "Daily Build" icon.
  • There is nothing worse for your sanity than when you are trying to ship the code, and there's one tiny bug, so you fix that one tiny bug right on the daily build server and ship it. As a golden rule, you should only ship code that has been produced by a full, clean daily build that started from a complete checkout.
  • Set your compilers to maximum warning level (-W4 in Microsoft's world; -Wall in gcc land) and set them to stop if they encounter even the smallest warning.
  • If a daily build is broken, you run the risk of stopping the whole team. Stop everything and keep rebuilding until it's fixed. Some days, you may have multiple daily builds.
  • Your daily build script should report failures, via email, to the whole development team. It's not too hard to grep the logs for "error" or "warning" and include that in the email, too. The script can also append status reports to an HTML page visible to everyone so programmers and testers can quickly determine which builds were successful.
  • One rule we followed on the Microsoft Excel team, to great effect, was that whoever broke the build became responsible for babysitting builds until somebody else broke it. In addition to serving as a clever incentive to keep the build working, it rotated almost everybody through the job of buildmaster so everybody learned about how builds are produced.
  • If your team works in one time zone, a good time to do builds is at lunchtime. That way everybody checks in their latest code right before lunch, the build runs while they're eating, and when they get back, if the build is broken, everybody is around to fix it. As soon as the build is working everybody can check out the latest version without fear that they will be hosed due to a broken build.
  • If your team is working in two time zones, schedule the daily build so that the people in one time zone don't hose the people in the other time zone. On the Juno team, the New York people would check things in at 7 PM New York time and go home. If they broke the build, the Hyderabad, India team would get into work (at about 8 PM New York Time) and be completely stuck for a whole day. We started doing two daily builds, about an hour before each team went home, and completely solved that problem.
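The "grep the logs and email failures" idea from the bullets above can be sketched as a small driver. This is an illustrative sketch only — the make targets, the helper names, and the script structure are assumptions, not the article's actual build script:

```python
# Hypothetical daily-build driver: build every target from scratch and
# collect the log lines worth emailing to the team.
import subprocess

TARGETS = ["release", "debug"]  # stand-ins for every shipped configuration

def scan_log_for_problems(log_text):
    """Return the lines worth emailing: anything mentioning error or warning."""
    return [line for line in log_text.splitlines()
            if "error" in line.lower() or "warning" in line.lower()]

def run_daily_build(targets=TARGETS):
    """Build each target with a clean rebuild; collect (target, ok, problems)."""
    results = []
    for target in targets:
        proc = subprocess.run(["make", "clean", target],
                              capture_output=True, text=True)
        problems = scan_log_for_problems(proc.stdout + proc.stderr)
        results.append((target, proc.returncode == 0, problems))
    return results
```

The real script would also check out a fresh tree first and mail the collected problem lines, per the bullets above.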

The Joel Test: 12 Steps to Better Code

by Joel Spolsky
Wednesday, August 09, 2000
 Have you ever heard of SEMA? It's a fairly esoteric system for measuring how good a software team is. No, wait! Don't follow that link! It will take you about six years just to understand that stuff. So I've come up with my own, highly irresponsible, sloppy test to rate the quality of a software team. The great part about it is that it takes about 3 minutes. With all the time you save, you can go to medical school.
The Joel Test
  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?
The neat thing about The Joel Test is that it's easy to get a quick yes or no to each question. You don't have to figure out lines-of-code-per-day or average-bugs-per-inflection-point. Give your team 1 point for each "yes" answer. The bummer about The Joel Test is that you really shouldn't use it to make sure that your nuclear power plant software is safe.
A score of 12 is perfect, 11 is tolerable, but 10 or lower and you've got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.

Painless Functional Specifications - Part 1: Why Bother?

by Joel Spolsky
Monday, October 02, 2000
 Why won't people write specs? People claim that it's because they're saving time by skipping the spec-writing phase. They act as if spec-writing was a luxury reserved for NASA space shuttle engineers, or people who work for giant, established insurance companies. Balderdash. First of all, failing to write a spec is the single biggest unnecessary risk you take in a software project.

 Let's visit two imaginary programmers at two companies. Speedy, at Hasty Bananas Software, never writes specs. "Specs? We don't need no stinkin' specs!" At the same time, Mr. Rogers, over at The Well-Tempered Software Company, refuses to write code until the spec is completely nailed down. These are only two of my many imaginary friends.

 Speedy decides that the best way to provide backwards compatibility is to write a converter which simply converts 1.0 version files into 2.0 version files. She starts banging that out. Type, type, type. Clickety clickety clack. Hard drives spin. Dust flies. After about 2 weeks, she has a reasonable converter. But Speedy's customers are unhappy. Speedy's code will force them to upgrade everyone in the company at once to the new version. Speedy's biggest customer, Nanner Splits Unlimited, refuses to buy the new software. Nanner Splits needs to know that version 2.0 will still be able to work on version 1.0 files without converting them. Speedy decides to write a backwards converter and then hook it into the "save" function. It's a bit of a mess, because when you use a version 2.0 feature, it seems to work, until you go to save the file in 1.0 format. Only then are you told that the feature you used half an hour ago doesn't work in the old file format. So the backwards converter took another two weeks to write, and it don't work so nice. Elapsed time, 4 weeks.

Now, Mr. Rogers over at Well-Tempered Software Company (colloquially, "WellTemperSoft") is one of those nerdy organized types who refuses to write code until he's got a spec. He spends about 20 minutes designing the backwards compatibility feature the same way Speedy did, and comes up with a spec that basically says:
  • When opening a file created with an older version of the product, the file is converted to the new format. 

The spec is shown to the customer, who says "wait a minute! We don't want to switch everyone at once!" So Mr. Rogers thinks some more, and amends the spec to say:
  • When opening a file created with an older version of the product, the file is converted to the new format in memory. When saving this file, the user is given the option to convert it back.
Another 20 minutes have elapsed.

Total elapsed time for Mr. Rogers: 3 weeks and 1 hour. Elapsed time for Speedy: 4 weeks, but Speedy's code is not as good.

The moral of the story is that when you design your product in a human language, it only takes a few minutes to try thinking about several possibilities, revising, and improving your design.

So that's giant reason number one to write a spec. Giant reason number two is to save time communicating. When you write a spec, you only have to communicate how the program is supposed to work once. Everybody on the team can just read the spec.  When you don't have a spec, what happens with the poor technical writers is the funniest (in a sad kind of way). Tech writers often don't have the political clout to interrupt programmers.

I think it's because so many people don't like to write. Staring at a blank screen is horribly frustrating. Personally, I overcame my fear of writing by taking a class in college that required a 3-5 page essay once a week. Writing is a muscle. The more you write, the more you'll be able to write. If you need to write specs and you can't, start a journal, create a weblog, take a creative writing class, or just write a nice letter to every relative and college roommate you've blown off for the last 4 years.

Painless Functional Specifications - Part 2: What's a Spec?

by Joel Spolsky
Tuesday, October 03, 2000
 This series of articles is about functional specifications, not technical specifications. People get these mixed up. I don't know if there's any standard terminology, but here's what I mean when I use these terms.
  1. A functional specification describes how a product will work entirely from the user's perspective. It doesn't care how the thing is implemented. It talks about features. It specifies screens, menus, dialogs, and so on.
  2. A technical specification describes the internal implementation of the program. It talks about data structures, relational database models, choice of programming languages and tools, algorithms, etc.
 Nongoals. When you're building a product with a team, everybody tends to have their favorite, real or imagined pet features that they just can't live without. If you do them all, it will take infinite time and cost too much money. You have to start culling features right away, and the best way to do this is with a "nongoals" section of the spec. Things we are just not going to do. A nongoal might be a feature you won't have ("no telepathic user interface!") or it might be something more general ("We don't care about performance in this release. The product can be slow, as long as it works. If we have time in version 2, we'll optimize the slow bits.") These nongoals are likely to cause some debate, but it's important to get it out in the open as soon as possible. "Not gonna do it!" as George Sr. puts it.

Side notes. While you're writing a spec, remember your various audiences: programmers, testers, marketing, tech writers, etc. As you write the spec you may think of useful factoids that will be helpful to just one of those groups. For example, I flag messages to the programmer, which usually describe some technical implementation detail, as "Technical Notes". Marketing people ignore those. Programmers devour them. My specs are often chock full of "Testing Notes," "Marketing Notes," and "Documentation Notes."

Specs Need To Stay Alive. Some programming teams adopt a "waterfall" mentality: we will design the program all at once, write a spec, print it, and throw it over the wall at the programmers and go home. All I have to say is: "Ha ha ha ha ha ha ha ha!"
This approach is why specs have such a bad reputation. A lot of people have said to me, "specs are useless, because nobody follows them, they're always out of date, and they never reflect the product."
Excuse me. Maybe your specs are out of date and don't reflect the product. My specs are updated frequently.

Painless Functional Specifications - Part 3: But... How?

by Joel Spolsky
Wednesday, October 04, 2000
 Who writes specs?
Let me give you a little Microsoft history here. When Microsoft started growing seriously in the 1980s, everybody there had read The Mythical Man-Month, one of the classics of software management. (If you haven't read it, I highly recommend it.) The main point of that book was that when you add more programmers to a late project, it gets even later. That's because when you have n programmers on a team, the number of communication paths is n(n-1)/2, which grows at O(n²).

Charles Simonyi, Microsoft's long time "chief architect", suggested the concept of master programmers. The idea was basically that one master programmer would be responsible for writing all the code, but he or she would rely on a team of junior programmers as "code slaves". The term "Master Programmer" was a bit too medieval, so Microsoft went with "Program Manager." Theoretically, this was supposed to solve the Mythical Man-Month problem, because nobody has to talk to anyone else -- every junior programmer only talks to the one program manager, and so communication grows at O(n) instead of O(n²). A program manager also needs to coordinate marketing, documentation, testing, localization, and all the other annoying details that programmers shouldn't spend time on.
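As a quick check of the arithmetic behind that argument, the formula is a one-liner:

```python
# The n(n-1)/2 formula from The Mythical Man-Month discussion above.
def communication_paths(n):
    """Pairwise communication paths among n programmers."""
    return n * (n - 1) // 2
```

Doubling a team from 10 to 20 programmers takes you from 45 paths to 190 — more than four times as many, which is why the O(n) program-manager structure matters.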

 In my time, the groups at Microsoft with strong program managers had very successful products: Excel, Windows 95, and Access come to mind. But other groups (such as MSN 1.0 and Windows NT 1.0) were run by developers who generally ignored the program managers (who weren't very good anyway, and probably deserved to be ignored), and their products were not as successful.

Here are three things to avoid.
1. Don't promote a coder to be a program manager. This is a classic case of the Peter Principle: people tend to be promoted to their level of incompetence.
2. Don't let the marketing people be program managers.
3. Don't have coders report to the program manager.

Painless Functional Specifications - Part 4: Tips

by Joel Spolsky
Sunday, October 15, 2000
 The biggest complaint you'll hear from teams that do write specs is that "nobody reads them." When nobody reads specs, the people who write them tend to get a little bit cynical. It's like the old Dilbert cartoon in which engineers use stacks of 4-inch thick specs to build extensions to their cubicles.
Rule 1: Be Funny
Yep, rule number one in tricking people into reading your spec is to make the experience enjoyable. Don't tell me you weren't born funny, I don't buy it.

Every time you need to tell a story about how a feature works, instead of saying:
  • The user types Ctrl+N to create a new Employee table and starts entering the names of the employees.
write something like:
  • Miss Piggy, poking at the keyboard with an eyeliner stick because her chubby little fingers are too fat to press individual keys, types Ctrl+N to create a new Boyfriend table and types in the single record "Kermit."

Rule 2: Writing a spec is like writing code for a brain to execute
When you write code, your primary audience is the compiler. Yeah, I know, people have to read code, too, but it's generally very hard for them. For most programmers it's hard enough to get the code into a state where the compiler reads it and correctly interprets it; worrying about making human-readable code is a luxury. Whether you write:
#include <stdio.h>

void print_count( FILE* a, char  *  b, int c ){
    fprintf(a, "there are %d %s\n", c, b);}

main(){ int n; n =
10; print_count(stdout, "employees", n); /* code
deliberately obfuscated */ }
or:
printf("there are 10 employees\n");
you get the same output. Which is why, if you think about it, you tend to get programmers who write things like:
Assume a function AddressOf(x) which is defined as the mapping from a user x, to the RFC-822 compliant email address of that user, an ANSI string. Let us assume user A and user B, where A wants to send an email to user B. So user A initiates a new message using any (but not all) of the techniques defined elsewhere, and types AddressOf(B) in the To: editbox.
This could also have been spec'ed as:
Miss Piggy wants to go to lunch, so she starts a new email and types Kermit's address in the "To:" box.

Technical note: the address must be a standard Internet address (RFC-822 compliant.)
They both "mean" the same thing, theoretically, except that the first example is impossible to understand unless you carefully decode it, and the second example is easy to understand. Programmers often try to write specs which look like dense academic papers. They think that a "correct" spec needs to be "technically" correct and then they are off the hook.

 The mistake is that when you write a spec, in addition to being correct, it has to be understandable, which, in programming terms, means that it needs to be written so that the human brain can "compile" it. For humans, you have to provide the big picture and then fill in the details. With computer programs, you start at the top and work your way to the bottom, with full details throughout.

Rule 3: Write as simply as possible
 People use words like "utilize" because they think that "use" looks unprofessional. (There's that word "unprofessional" again. Any time somebody tells you that you shouldn't do something because it's "unprofessional," you know that they've run out of real arguments.)

 Break things down to short sentences. If you're having trouble writing a sentence clearly, break it into two or three shorter sentences. 

Rule 4: Review and reread several times

Rule 5: Templates considered harmful
Avoid the temptation to make a standard template for specs. At first you might just think that it's important that "every spec look the same." Hint: it's not. What difference does it make?

Evidence Based Scheduling

by Joel Spolsky
Friday, October 26, 2007
Software developers don’t really like to make schedules. Usually, they try to get away without one. “It’ll be done when it’s done!” Why won’t developers make schedules? Two reasons. One: it’s a pain in the butt. Two: nobody believes the schedule is realistic.

Over the last year or so at Fog Creek we’ve been developing a system that’s so easy even our grouchiest developers are willing to go along with it. And as far as we can tell, it produces extremely reliable schedules. It’s called Evidence-Based Scheduling, or EBS. You gather evidence, mostly from historical timesheet data, that you feed back into your schedules. What you get is not just one ship date: you get a confidence distribution curve, showing the probability that you will ship on any given date. It looks like this:

The steeper the curve, the more confident you are that the ship date is real.

1) Break ‘er down

When I see a schedule measured in days, or even weeks, I know it’s not going to work. You have to break your schedule into very small tasks that can be measured in hours. Nothing longer than 16 hours.

2) Track elapsed time

It’s hard to get individual estimates exactly right. How do you account for interruptions, unpredictable bugs, status meetings, and the semiannual Windows Tithe Day when you have to reinstall everything from scratch on your main development box? Heck, even without all that stuff, how can you tell exactly how long it’s going to take to implement a given subroutine?
You can’t, really.

3) Simulate the future

In a Monte Carlo simulation, you can create 100 possible scenarios for the future. Each of these possible futures has 1% probability, so you can make a chart of the probability that you will ship by any given date. 
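The Monte Carlo step can be sketched as follows. This is a minimal illustration, not Fog Creek's implementation: the function name is mine, and the velocity history (estimate divided by actual time, gathered from timesheets) and task estimates used below are invented sample data.

```python
# Sketch of the Monte Carlo simulation in Evidence Based Scheduling.
import random

def simulate_ship_dates(estimates_hours, velocity_history, n_futures=100, seed=1):
    """For each simulated future, divide every task estimate by a velocity
    drawn at random from the developer's history, and sum the results."""
    rng = random.Random(seed)
    futures = []
    for _ in range(n_futures):
        total = sum(est / rng.choice(velocity_history) for est in estimates_hours)
        futures.append(total)
    return sorted(futures)  # sorted totals form the confidence distribution
```

Each of the 100 sorted totals has 1% probability, so futures[89] is the 90%-confidence completion time; the tighter the velocity history, the steeper the resulting curve.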


Thursday, December 15, 2011

Setting the time manually / forcing it on a Windows client in a domain

-- Taken from

net time \\nombre_de_DC /set /y

It is a good idea to first update the time on the domain controller:

w32tm /resync

Thursday, February 25, 2010

Messages when making an Ajax call

--- taken from a post seen on the [Proto-Scripty] mailing list

Ajax.Responders.register({
    onCreate: function() {
        new Effect.Appear('ajax_loader', { duration: 0.3, to: 0.5 });
    },
    onComplete: function(request, transport, json) {
        if (0 == Ajax.activeRequestCount) {
            new Effect.Fade('ajax_loader', { duration: 0.3, from: 0.5 });
        }
        if (!request.success()) {
            var errorMapping = $H({
                400: ['Bad Request', 'The request contains bad syntax or cannot be fulfilled.'],
                401: ['Authorization Required', 'You need to authenticate to access this page.'],
                403: ['Forbidden', 'The request was a legal request, but the server is refusing to respond to it.'],
                404: ['Page Not Found', 'The requested resource could not be found.'],
                405: ['Method Not Allowed', 'A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.'],
                406: ['Not Acceptable', 'The action you tried to perform on this resource was considered unacceptable.'],
                415: ['Unsupported Media Type', 'The media type you are requesting is unsupported.'],
                422: ['Unprocessable Entity', 'The request was well-formed but was unable to be followed due to semantic errors.'],
                500: ['Application Error', 'An error occurred in the application code. Report sent.'],
                503: ['Service not available', 'The webserver did not respond to the request.'],
                505: ['HTTP Version Not Supported', 'The requested version is not available on this server.']
            });

            var errorMessage = errorMapping.get(transport.status) || ['Unknown Error', 'An error occurred, but could not be determined correctly.'];

            if (transport.responseJSON && transport.responseJSON.error)
                errorMessage = [transport.responseJSON.error.title, transport.responseJSON.error.message];

            var notifyUser = new GrowlNotifier({
                title: errorMessage[0],
                message: errorMessage[1],
                image: "/images/elements/growl_warning.png",
                type: 'error'
            });
        }
    }
});

Thursday, September 17, 2009

Installing the Messenger Live 2009 (14) MSI with a GPO

---taken from

The option to install Windows Live Messenger (up to the version publicly available as of 2009-09-15) is apparently disabled by Microsoft, so it is necessary to edit the messenger.msi file to enable it.

To do this:
* install the Windows Installer SDK.
* in the tools folder where the SDK was installed there is a file called orca.msi; install it.
* a shortcut will be created in the Programs menu; open the program.
* in Orca, open messenger.msi; on the left side, right-click and create the AdvtExecuteSequence table.
* in that table, add the following entries:

CostInitialize 800
CostFinalize 1000
InstallValidate 1400
InstallInitialize 1500
CreateShortcuts 4500
RegisterClassInfo 4600
RegisterExtensionInfo 4700
RegisterProgIdInfo 4800
RegisterMIMEInfo 4900
PublishComponents 6200
MsiPublishAssemblies 6250
PublishFeatures 6300
PublishProduct 6400
InstallFinalize 6600
"ProgramMenuFolder.ADEB440D_7847_4F65_80BD_899870ED2EC9" 1

all of them with the Condition column left blank

Save, and that's it: you can now create a GPO to install it.

Installing the Messenger Live 2009 (14) MSI

---Taken from

The latest versions (as of 2009-09-15) of Windows Live Messenger install from a .exe file that downloads all of the Windows Live tools (Messenger included), which makes things more complicated for system administrators (sysadmins).

But we can get the .msi files from the folder c:\archivos de programa\archivos comunes\Windows Live\.cache\, which contains several randomly named folders.

To install Messenger you need to install Contacts.msi, dw20shared.msi, crt.msi, and Messenger.msi.
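One possible way to install those four packages silently from that cache folder, using msiexec's quiet switch. The order shown (shared pieces first, Messenger last) is my assumption, not from the post:

```shell
:: Install silently (/qn = quiet, no UI), dependencies before Messenger.
msiexec /i Contacts.msi /qn
msiexec /i dw20shared.msi /qn
msiexec /i crt.msi /qn
msiexec /i Messenger.msi /qn
```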

Wednesday, July 22, 2009

Remote Desktop: service does not start

It happened to me that I could not connect via Remote Desktop; on checking, I saw that nothing was listening on port 3389.

After searching for a fix I realized it was because I had deleted the RDP connection. I deleted it out of ignorance, honestly. Before the problem, I had connected to the machine remotely and tried to log off, but it got stuck; I closed the window even though it warned me the session would remain open, thinking it would close on its own after a while. Some time later I opened a session again and it was still stuck at the same point; something had frozen and the session never closed.

I went to the machine directly and tried to delete the connection: I opened the Terminal Services configuration and deleted RDP-Tcp under "Connections". MISTAKE.
Well, to fix it, do the following:
- open tscc.msc
- select "Connections" and, in the Action menu, click "New connection".

The "normal" options for a connection are:
- encryption: compatible with the client
- security layer: RDP layer
- logon settings: use those provided by the client, without "always prompt for password"
- remote control: use remote control with default settings
- permissions: Administrators (full control), Local Service (special permissions: query information and message), Network Service (special permission: message), and Remote Desktop Users (user access and guest access).