Category Archives: Security

Myths #3: Give without giving

One more mystery for me: how to give everything without giving everything. This is exactly the question I see very often on various forums and in other places, and the one I hear in person from time to time. It can be asked in several forms; the most frequent are:

1) How can I give a user local admin rights and be sure that they cannot do <put your own stuff here>?

2) How can I restrict my domain admin from accessing the <your very valuable information>?

Naturally, at this point I start boiling and all that stuff, but let’s look at it again.

Well, granting a user administrative rights in a system is going to give them administrative rights: that's the point. And administrative access means the user can do everything. Whatever they cannot do right now, they can grant themselves the right to do. Period.

In the first case you can only audit the user's actions; that's all you can do. Moreover, the audit collection and processing must be done on a remote system which is not accessible to (let alone administered by) the user in question. Any other variant, like granting local admin rights but denying access to some aspects of the system… It just won't work.

The second case is a bit more complicated, because the systems we are discussing are usually more distributed. However, even then you cannot do much more than in the previous one. Again: strict audit, with no chance for the admin to tamper with it. The only exception to that rule is if you build a system which, say, encrypts the data and is not governed by the domain admin. But this is tricky, especially considering that the admin can get the data from the computer of the user who decrypts it in order to work with it (pass-the-hash or any other attack is possible if he has administrative access to any part of the "secure system").

Therefore, really, audit is all you have for critical data, including audit of access to the backup and restore systems.
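To illustrate the tamper-evidence requirement, here is a toy hash-chained log in Python (a sketch of the idea only; the class and field names are my own, and a real deployment still has to ship the records to a collector the audited admin cannot reach). Each record commits to the hash of the previous one, so editing or deleting any entry breaks verification of everything after it:

```python
import hashlib
import json

class AuditLog:
    """Toy hash-chained audit log: each record stores the hash of the
    previous one, so changing or removing any entry invalidates the
    rest of the chain."""

    def __init__(self):
        self.records = []
        self.last_hash = "0" * 64  # hash of the (empty) chain so far

    def append(self, event: str) -> None:
        record = {"event": event, "prev": self.last_hash}
        serialized = json.dumps(record, sort_keys=True).encode()
        self.last_hash = hashlib.sha256(serialized).hexdigest()
        self.records.append(record)

    def verify(self) -> bool:
        prev = "0" * 64
        for record in self.records:
            if record["prev"] != prev:
                return False  # the chain was broken here
            serialized = json.dumps(record, sort_keys=True).encode()
            prev = hashlib.sha256(serialized).hexdigest()
        return prev == self.last_hash

log = AuditLog()
log.append("user added to local Administrators")
log.append("file server ACL changed")
assert log.verify()  # chain intact

log.records[0]["event"] = "nothing happened"
assert not log.verify()  # any edit after the fact is detected
```

Note that the chain only makes tampering detectable, not impossible, so the point above stands: the collector itself must be out of the admin's reach.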

Any other ideas?

#RuTeched: answering the questions. Does Dynamic Access Control work over replication?

As I said previously, my labs were a success; still, I wasn't able to answer some questions and promised to answer them later. The time has come for the first of them. One of the visitors told me that he had had a case when some file attributes wouldn't replicate over DFSR, and asked me whether DAC has any problem in the same situation. I could certainly experiment myself (and I will), but any experiment of mine would just give me an answer: "yes" or "no". Or "maybe", for that matter. It wouldn't explain why. As I'm not great with replication, I had to beg for help and, luckily, I knew where to get it: the AskDS blog.

In no time I received the answer. The short one is: "everything will be ok with your files". The long one I will just cite here:

“Let me clarify some aspects of your question as I answer each part

When enabling Dynamic Access Control on files and folders there are multiple aspects to consider that are stored on the files and folders.

Resource Properties

– Resource Properties are defined in AD and used as a template to stamp additional metadata on a file or folder that can be used during an authorization decision.  That information is stored in an alternate data stream on the file or folder.  This would replicate with the file, the same as the security descriptor

Security Descriptor

The security descriptor replicates with the file or folder.  Therefore, any conditional expression would replicate in the security descriptor.

All of this occurs outside of Dynamic Access Control – it is a result of replicating the file throughout the topology, for example if using DFSR.  Central Access Policy has nothing to do with these results.

Central Access Policy

Central Access Policy is a way to distribute permissions without writing them directly to the DACL of a security descriptor. So, when a Central Access Policy is deployed to a server, the administrator must then link the policy to a folder on the file system.  This linking is accomplished by inserting into the auditing portion of the security descriptor a special ACE that informs Windows that the file/folder is protected by a Central Access Policy.  The permissions in the Central Access Policy are then combined with Share and NTFS permissions to create an effective permission.

If a file/folder is replicated to a server that does not have the Central Access Policy deployed to it, then the Central Access Policy is not valid on that server.  The permissions would not apply".

Thanks, guys. You're the best 😉
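The combination rule at the end of the quote, Central Access Policy permissions combined with Share and NTFS permissions, boils down to "most restrictive wins": a right is effective only if every layer grants it. A minimal Python sketch, with the permission names purely illustrative (real Windows access checks are far more involved):

```python
def effective_permissions(central_policy, share, ntfs):
    """A right is effective only if the Central Access Policy, the
    share permissions and the NTFS permissions all grant it."""
    return central_policy & share & ntfs

# A CAP allowing read/write is still trimmed by a read-only NTFS ACL:
assert effective_permissions(
    {"read", "write"}, {"read", "write"}, {"read"}
) == {"read"}
```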

Want to learn about cryptography? I know where.


 Take notice: My new feed address is now Please re-subscribe.

Do you have some spare time and want to know how cryptography works? What is the most secure cipher? And why λ is always greater than ε… Well, the latter is not true =)
Anyhow, there is a place where you can learn more about cryptography for free. Stanford University provides such a course for free at I'm in the second week now, and I have already tampered with one ciphertext and decrypted another (it's not that tricky, but very time-consuming).

So welcome to the world of knowledge 😉

Myths #2: PKI edition.




BTW, do you know what certificate template options like "Allow private key to be exported" or "Prompt the user during enrollment and require user input when the private key is used" really do? Do they make you more secure or not?

Certainly, some people who read my blog know the answer, and others have already guessed it: no. They don't enforce any behavior on the client: they just communicate the features requested by the CA.

A good example of this was Windows 2003: while you weren't able to export the certificate through the GUI, you could do it with… some certificates. Furthermore, in Windows 2008 R2 (or Windows 7, as it goes) even some GUI instruments can export such a key. So you cannot keep your user from exporting and moving the certificate.

Be careful, and take care to think about whether you can trust what you see 😉

Trustworthy computing: non-SDL view. Part 2: non-corporate.


Do you think my latest post was about corporate products because only corporate products suffer from not being designed for secure deployment? No, consumer ones are built the same way. Take the famous story about Windows Live Mail and Live Mail's SSL: until the recent changes you weren't able to use both of them. Either you exposed your communication by not using SSL, or you couldn't use the convenient client. I was very glad to finally get the ability to use them both.

To sum up: we have excellent products which in most cases aren't exploitable through their functions. Still, those products don't have all the abilities necessary to be incorporated into a strict environment. Some things are changing, some are not, but there are still many possibilities to fix this before I or any other user discovers the problems in our own networks.

I'm glad that Microsoft is steadily improving these things, but I want some of them done before RTM. Do you remember any cases similar to what I have described in these two posts?

Trustworthy computing: non-SDL view




Well, finally it is my time to scold Microsoft. I'm not a fan of this type of self-promotion; still, I believe that the only way to move forward is to receive, process and answer some constructive criticism. So let's begin:
Several years ago Microsoft announced its widely known Trustworthy Computing initiative (in fact, it just celebrated its tenth anniversary). I probably don't have to remind you of the goals and means of the initiative; they can all be found without any problem. Anyway, this post doesn't pretend to be some kind of thorough analysis after which I will exclaim "MS lies!" On the contrary, it merely tries to show that, in my humble opinion, something in the current approach to security can be improved.
I am an IT pro with 10+ years of experience, and this fact definitely affects how I see the world, security, and Microsoft's products with regard to both. My recent impression of Trustworthy Computing is like this:
"SDL! SDL this! SDL that! SDL is everything and everywhere!"
Don't get me wrong, SDL is great even from the perspective of a systems administrator who can barely write code. Seriously, I have the feeling that Microsoft's code itself has become much more secure over the past years. Most of the recent vulnerabilities require me to turn off some safeguards (like DEP or UAC) or to leave things unconfigured in an extremely hazardous environment (such as not turning off the Server service on an Internet-facing computer). As a consequence, I feel much safer with the products I use than, say, 10 years ago. Still, there are some recent developments which make me believe that the current SDL lacks something vital. One may ask: "What exactly do you mean?" Well, it is testing in environments built according to security best practices, and creating code which is not only free of vulnerabilities but also provides the features needed to implement the controls recommended by those best practices, and delivers these features without failing. Everything, literally everything, starting with smart card authentication and finishing with separation of duties or delegation of access, has to be incorporated into a product to build a somewhat secure environment. You cannot feel secure if those who make your backups are able to restore them and to configure the way they are created, or if you have to give SQL farm administrator permissions to someone who only needs to do some basic job. During the past several years I have witnessed some events which made me think that these matters haven't been in focus for some PGs for several years at least, if not ever. So as not to be accused of making this up, I'll give you some examples from my own experience and observations.
1) When MS SharePoint Server 2007 had just been released, we tried to install it in the company I worked for. Our policies required the use of Kerberos constrained delegation, publishing of any web application through ISA Server SSL bridging, and all that stuff, including smart card authentication. Sound requirements, aren't they? Unfortunately, the product obviously wasn't tested with such constraints. We stepped into multiple problems, which were solved over the course of several MS Support cases. Fortunately, all of them were a success; at the very least we received workarounds. For example, indexing didn't work on an SSL site, and if you first created the SSL site on port 443 and then extended it to port 80 (which was to be crawled by MOSS), then indexing worked fine but search didn't return results. The correct sequence was to create the site on port 80 and then extend it to 443. Not a big deal, one may say, but this could have been detected by automated testing in the relevant environment (BTW, this behavior was said to be by design and was fixed in the following SPs 😉).
2) The second case, which is relatively close to SharePoint, comes from the people who created WebDAV. The technology is very useful, though, again, it was never tested in a secure environment. Publish it through ISA Server, require users to present their smart cards to access the WebDAV resource and… voila! There are your problems.
3) Smart card support really seems to be the weak point for the developers. We absolutely love Microsoft's UC products: Exchange and OCS/Lync. But can you use Outlook and Communicator to authenticate by certificate? Hell, no! Build a VPN channel (or DirectAccess) and then use it if you want secure communications.
4) Data Protection Manager. It is our beloved one. Being as simple yet powerful as it is, it is just charming. Still, three major releases later, we didn't have any separation of duties. If I am a local administrator, I can back up, restore and configure everything. If I am not a local administrator, I can do almost nothing. There are some valuable exceptions, but not all we need. The latest release has RBAC in it, as the PG promised; still, the five years without it sucked.
5) A problem with SQL Server. To get a highly available solution, one can use SQL Server mirroring. It is great and has really saved our applications many times. But when we stepped over the boundary where we had to implement RBAC for administrative tasks, we ran into the following problem: running ALTER DATABASE against any database which is in recovery mode, while holding less-than-administrator permissions, crashes the process and, by default, dumps it to a file. The operation is very often used with a mirrored database, for example to set up the mirror itself. Again, the bug was admitted, but the proposed workaround was to use administrator permissions for the job; the bug will be fixed in the next release, they said. This bug can be costly, at least it is for us (BTW, technically it can cause a DoS of the SQL Server, as the dumps can be very large and created very fast).
All the bugs above could have been found by testing against an environment built in accordance with security best practices. The features which are simply absent (they are not bugs) could have been introduced much earlier if someone had really thought about secure deployment. Unfortunately, all the examples above show that this job hasn't been done. I would like to think that these are only individual mistakes, but if only one man (me) ran into so many of them, then I am afraid they are a consequence of a lack of integrity in the PGs' approach to trustworthy computing.

MS SIR #12

Okay, better late than never: I finally got to the latest Microsoft Security Intelligence Report. While usually there is not much unexpected in it, this time I was almost shocked by the first section of the document. And I believe that's excusable, because it is named…

How Conficker CONTINUES to propagate.

Conficker! The three-year-old malware! CONTINUES to be a THREAT! Are we going nuts? =)

60% of the people who could have got it (if not for antivirus) have weak admin passwords. And 17 to 42% (XP only) still have the vulnerability which is used by the worm, three years after the patch was issued…

This is a crazy world, guys =)

Everything else in the report is not half as thrilling as this:


1) HTML/JavaScript exploits are on the rise

2) It seems that document exploits are steadily growing too. Probably sooner or later we'll see some e-book reader exploited 😉

3) Spam seems to be declining in quantity (at least in this report =) ). What came as a surprise to me is the fact that the #1 contributor to the spam flow was email advertising non-sexual pharmaceuticals. Probably I just wasn't interested in that section while reading the previous reports. Still, it's very refreshing to find that health is a more reliable way to earn money than "enlarging someone's manhood" =)

4) No surprise in the fact that most successful malware needs user action to be installed. But Conficker is #6… Like I said, a shocking discovery =(

%SystemRoot%\system32 secrets: cipher

The next command on my list is the one you never remember about until a user comes in with a cry: "I've reset my password and now all my EFS-encrypted files are gone!!!" Are you familiar with the situation? Fortunately, I am not, but I have heard some related horror stories. Backing up the encryption keys is the key. And updating the keys on the files. And creating recovery keys. And backing up the encryption keys again. All of that the utility in question can do for you.

There are plenty of articles about the actions described above. But when I looked at the utility's description more closely, I found one function that was new to me: cipher with the "/W" argument and a folder will remove all data from the unused disk space on the volume where the folder is placed. What it does is:

1) Creates the folder EFSTMPWP on the volume:

2) Creates a temp file there (or several, according to some sources)


3) Writes zeros there, then ones, and finishes with some random values:


It does each step until the whole disk is filled up, then repeats with the next pattern:




Of course, it is quite time-consuming, especially on large volumes. But if I were the person designing the command, I'd rather have it write not just zeros and ones, but encrypt every free cluster with a random key. Luckily it wasn't me, so the procedure is not even longer than it is 😉

The command asks you to close all applications to make the effort as effective as possible, mostly to eliminate the temp files with data in them.
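From the behavior described above, the wipe can be sketched in Python roughly like this (my own reconstruction of the idea, not the actual cipher implementation; the max_bytes cap is an addition so the sketch can be run without actually filling a disk):

```python
import os
import tempfile

def wipe_free_space(directory, max_bytes=None, chunk_size=1024 * 1024):
    """Fill the free space of the volume holding `directory` with a
    pass of zeros, a pass of 0xFF bytes and a pass of random data,
    deleting the temp file after each pass.  `max_bytes` caps a pass
    for demonstration; the real tool just writes until the disk is
    full."""
    for pattern in (b"\x00", b"\xff", None):  # None = random pass
        fd, path = tempfile.mkstemp(dir=directory)
        written = 0
        try:
            with os.fdopen(fd, "wb") as f:
                while max_bytes is None or written < max_bytes:
                    if pattern is None:
                        chunk = os.urandom(chunk_size)
                    else:
                        chunk = pattern * chunk_size
                    try:
                        f.write(chunk)
                    except OSError:  # disk full: this pass is done
                        break
                    written += len(chunk)
        finally:
            os.remove(path)  # free the space again for the next pass
```

Deleting the temp file between passes matters: each pattern has to land on the same free clusters, so the space must be released before the next pass starts.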

Further reading:

cipher /?

Delegate permissions for creating GPO objects in another domain

The task is an obvious one to complete on your way to implementing a role-based administration concept. And, to be honest, being in euphoria after a quick acquaintance with AGPM, I thought it was no big deal at all: give an account or a group membership in some special groups, including "Group Policy Creator Owners", and voila, you've got it. Aha. Like hell it can succeed! =) This darn group is global and thus cannot be populated with objects from other domains. Moreover, you are unable to change that fact: everything is dimmed.


At least I don't know of a way to change the group's scope (but I have made a note to myself to find out everything about it). So we won't get it the easy way. Will we retreat? No way. If we can't add our object to the group, we can create another group and grant the permissions to it directly. What permissions does the "Group Policy Creator Owners" group have? As far as I know, to create any GPO we need permissions in two places: the Policies container in AD and the Policies folder in SYSVOL. So let us delegate the permissions to the brand-new group "Role GP Creator Owners":

1) In AD, on the Domain/System/Policies container:





I guess "Create All Child Objects" is a bit of an overkill and we could do better (just a guess), but the "Group Policy Creator Owners" group has these permissions, so we won't do any worse.

2) Now on the Policies folder:



That'll do the job for us. At least it did for me; still, I recommend checking it with support if you have it. I'll definitely do that and fix the article if needed.

Wildcard certificates drawbacks

That's one of the search queries which leads users to my blog. OK, there certainly are drawbacks, so why not discuss them? But first things first: what are those wildcard certificates?

To protect communications with web services or web sites (and not only them, actually) we use SSL certificates. I have to say that this doesn't actually mean that every site with an https prefix and a valid certificate is itself valid, or that communications with it are protected, but that's not for today's discussion. Anyway, SSL certificates are somewhat brilliant and somewhat ugly, but they are our reality for now. And since a certificate is issued for one particular domain name, every one of your sites should have its own certificate for its protection. If you have many web sites, or a medium-to-large infrastructure which uses a multitude of services protected by certificates, you are usually bound to manage dozens and hundreds of certificates which expire, need to be renewed, need to be monitored… A hell of a job. But "security is security" and all that stuff, so you have to do it all.

But the world wouldn't see many inventions if not for lazy people, you know. So those lazy people invented wildcard certificates. These are issued to names like * and hence can be used on any subdomain of the domain in question. That solves all the problems above. Or does it? Indeed it does. But nothing comes without a cost, remember? This case is no exception to the rule. If you use a wildcard certificate in your organization, you have the same private key on every box which needs the certificate (there are services which allow you to create multiple wildcard certificates with one name but different private keys… but then why even bother?). Statistically, that means you are increasing the risk of compromising this one particular key. Some don't think this is a problem; well, why bother with certificates at all, then? =) Suppose we are not those guys, and we think that the compromise of a certificate covering several dozen services is a problem, though not a very big one: we'll just get a new one and spread it over our sites. But since we had this certificate compromised, we have to consider any of our services compromised as well. So we also need to audit those systems to find out whether they are OK. And, believe me, that is very expensive.
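To make the "one name, many hosts" idea concrete, here is a rough sketch of how a client matches a certificate name against a hostname. It follows the common rule (as in RFC 6125) that the wildcard must be the entire leftmost label and matches exactly one label; the function and the contoso.com names are my own illustration, not any particular library's API:

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Does a certificate name (possibly a wildcard) cover a hostname?

    The wildcard is only honoured as the entire leftmost label and
    matches exactly one label, so *.contoso.com covers
    www.contoso.com but neither contoso.com nor a.b.contoso.com."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    if p_labels[0] != "*":
        return p_labels == h_labels  # plain name: exact match only
    return p_labels[1:] == h_labels[1:]

assert hostname_matches("*.contoso.com", "www.contoso.com")
assert not hostname_matches("*.contoso.com", "a.b.contoso.com")  # too deep
assert not hostname_matches("*.contoso.com", "contoso.com")      # parent itself
```

Exactly because one pattern covers every such host, one private key ends up on every one of those hosts, which is the risk discussed above.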

So security will definitely be harmed (at least statistically). But that's not all: you may (or may not) run into many other problems.

To sum up: I'm definitely not fond of the solution, at least for now. But you can and should decide on your own. =)