I've got the script below:
var els = document.getElementsByTagName("a");
for (var i = 0, l = els.length; i < l; i++) {
    var el = els[i];
    el.innerHTML // ... (snippet truncated here; the loop body reads innerHTML, runs it through a regexp, and writes it back)
Reading and writing the innerHTML property on every element is probably quite expensive and is the likely cause of your slowdown: it forces the browser to serialize the element's contents to a string, which you then run through a regexp, and the result then has to be parsed ("deserialized") back into DOM nodes. Even worse, you're doing it for every a element, whether or not it matches.
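Concretely, the pattern being described presumably looks something along these lines (the regexp here is only a hypothetical placeholder, since the original snippet is truncated):

// Hypothetical reconstruction of the slow pattern: each anchor's markup is
// serialized to a string, rewritten with a regexp, then re-parsed by the browser.
var els = document.getElementsByTagName("a");
for (var i = 0, l = els.length; i < l; i++) {
    var el = els[i];
    el.innerHTML = el.innerHTML.replace(/some pattern/, "dead link"); // read + regexp + write back
}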
Instead, try looking directly at the properties of the a element:
var els = document.getElementsByTagName("a");
for (var i = 0, l = els.length; i < l; i++) {
    var el = els[i];
    // el.href is the already-parsed, absolute URL, so no serialization is needed
    if (el.href === 'http://www.example.com/') {
        el.innerHTML = "dead link";
        el.href = "#";
    }
}
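One detail worth noting: the href property reflects the fully resolved, absolute URL rather than the literal attribute text, which is why the exact-match comparison above works even for relative links. A quick sketch (the markup is invented for illustration):

// Assume the page is http://www.example.com/ and contains: <a href="/">home</a>
var a = document.getElementsByTagName("a")[0];
a.getAttribute("href"); // "/"                        - the raw attribute value
a.href;                 // "http://www.example.com/"  - the resolved absolute URL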
EDIT: on modern browsers with much greater W3C conformance, you can now use document.querySelectorAll() to obtain just the links you want more efficiently:
// The attribute value must be quoted inside the selector, otherwise querySelectorAll throws a syntax error
var els = document.querySelectorAll('a[href^="http://www.example.com/"]');
for (var i = 0, l = els.length; i < l; i++) {
    els[i].textContent = 'dead link';
    els[i].href = '#';
}
This is, however, not so flexible if there are multiple domain names you wish to match, or if, for example, you want to match both http: and https: at the same time.
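If you do need that flexibility, one option is to go back to looping over all the anchors and testing the parsed URL parts that anchor elements expose (protocol, hostname, and so on). A sketch, where the list of hosts is just an example:

var deadHosts = ['www.example.com', 'example.com']; // example hosts to treat as dead
var els = document.querySelectorAll('a');
for (var i = 0, l = els.length; i < l; i++) {
    var el = els[i];
    // Anchor elements expose the parsed URL: el.protocol is e.g. "http:" or "https:"
    if ((el.protocol === 'http:' || el.protocol === 'https:') &&
            deadHosts.indexOf(el.hostname) !== -1) {
        el.textContent = 'dead link';
        el.href = '#';
    }
}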