I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I'm having a problem with is how to get Postfix to transfer the email to the particular IMAP server that the user's account is on. I know that I need to use lmtp and transport, but all the examples I have seen forward all email to one IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
Anyone have a working example that they could share? It would be greatly appreciated.
thanks -matt
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
Anyone have a working example that they could share? It would be greatly appreciated.
Sorry, never did lmtp but if I read your post properly, you want to do a transport map lookup for each mailbox to get the correct lmtp entry.
I suggest using cdb for your transport map database and rebuilding it, say, every eight hours. cdb offers fast lookups and fast database builds. You can store the entries in mysql and dump them to a file for cdb creation. I do not suggest mysql for transport, even if you are using mysql connection pooling, because transport tables get called in a lot of places.
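For example (untested, and assuming a table called mailbox with address and host columns - adjust to your schema - and that postconf -m lists cdb in your Postfix build), the cron job could be something like:

  mysql -N -e "SELECT CONCAT(address, ' lmtp:[', host, ']') FROM mailbox" maildb > /etc/postfix/transport
  postmap cdb:/etc/postfix/transport

with main.cf pointing at it:

  transport_maps = cdb:/etc/postfix/transport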
Matt Shields wrote:
I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
Anyone have a working example that they could share? It would be greatly appreciated.
http://www.postfix.org/MYSQL_README.html
Then you can create a view over your existing data schema to fit the schema Postfix needs.
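Something like this, purely as an illustration (table and column names are guesses, and it needs MySQL 5.0+ for views):

  CREATE VIEW postfix_transport AS
      SELECT CONCAT(username, '@', domain) AS address,
             CONCAT('lmtp:[', imap_host, ']') AS transport
      FROM accounts;

and then point the query in the Postfix mysql map file at the view instead of the base tables.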
-Ross
Matt Shields wrote:
I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
There are primarily two ways:
[virtual alias] you can use virtual_alias_maps to redirect foo@example.com to foo@hostN.example.com, provided the final server accepts such addresses.
If the final server doesn't accept these, and you use smtp to relay to it, then you can rewrite the addresses back using smtp_generic_maps.
[transport] an alternative is to use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In both approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
Anyone have a working example that they could share? It would be greatly appreciated.
thanks -matt
mouss wrote:
Matt Shields wrote:
I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
There are primarily two ways:
[virtual aliase] you can use virtual_alias_maps to redirect foo@example.com to foo@hostN.example.com, provided the final server accepts such addresses.
If the final server doesn't accept these, and you use smtp to relay to, then you can write the addresses back, using smtp_generic_maps.
[transport] an laternative is to use use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In bothe approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
just to add that the virtual aliases way is to be preferred. transport_maps is a "latency sensitive" map, so it is better not to use an rdbms for that.
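a rough sketch of the virtual alias variant (file names, credentials and table layout below are only placeholders):

  # main.cf
  virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual-alias.cf

  # /etc/postfix/mysql-virtual-alias.cf
  hosts = 127.0.0.1
  user = postfix
  password = secret
  dbname = mail
  query = SELECT CONCAT(user, '@', host, '.example.com') FROM User
      WHERE '%u' = user AND '%d' = domain

so foo@example.com gets rewritten to foo@hostN.example.com before delivery.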
mouss wrote:
mouss wrote:
Matt Shields wrote:
I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
There are primarily two ways:
[virtual alias] you can use virtual_alias_maps to redirect foo@example.com to foo@hostN.example.com, provided the final server accepts such addresses.
If the final server doesn't accept these, and you use smtp to relay to it, then you can rewrite the addresses back using smtp_generic_maps.
[transport] an alternative is to use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In both approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
True, it may be better to just have a cron job dump out new static maps every 15 minutes or so than to have the MTA query on every delivery, especially for 100K accounts.
-Ross
Ross S. W. Walker wrote:
mouss wrote:
mouss wrote:
Matt Shields wrote:
I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
There are primarily two ways:
[virtual alias] you can use virtual_alias_maps to redirect foo@example.com to foo@hostN.example.com, provided the final server accepts such addresses.
If the final server doesn't accept these, and you use smtp to relay to it, then you can rewrite the addresses back using smtp_generic_maps.
[transport] an alternative is to use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In both approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
True, it may be better to just have a cron job dump out new static maps every 15 minutes or so then to have the MTA query on every delivery especially for 100K accounts.
indeed. and if the table has a "status" field so that the script can download only new or modified entries, then the dump can be made faster. now, a trigger may even be better so that the dump script doesn't need to query the full table, but only a small table of new/modified entries generated by the trigger.
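for instance (needs MySQL 5.0+ for triggers, and the names here are invented):

  CREATE TABLE mailbox_changes (address VARCHAR(255), host VARCHAR(255));

  CREATE TRIGGER mailbox_changed AFTER UPDATE ON mailbox
  FOR EACH ROW INSERT INTO mailbox_changes VALUES (NEW.address, NEW.host);

plus a similar AFTER INSERT trigger; the dump script then reads (and empties) mailbox_changes instead of scanning the whole mailbox table.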
Data changes too frequently to generate the file every x number of minutes across all smtp servers.
The mysql db isn't a single server. It's a master (read/write) with multiple replicas for read access. Those replicas are load balanced with LVS (heartbeat/ldirectord/ipvsadm). The postfix (smtp) incoming and outgoing servers are also load balanced with LVS. So database read speed is not an issue. Believe me, we know how to build large high-traffic sites; the only problem we're having is the exact syntax for using transport_maps or virtual_transport with multiple lmtp transports, and I think I've got that figured out with transport_maps. Will post more later.
-matt
On 10/23/07, mouss mlist.only@free.fr wrote:
Ross S. W. Walker wrote:
mouss wrote:
mouss wrote:
Matt Shields wrote:
I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
There are primarily two ways:
[virtual alias] you can use virtual_alias_maps to redirect foo@example.com to foo@hostN.example.com, provided the final server accepts such addresses.
If the final server doesn't accept these, and you use smtp to relay to it, then you can rewrite the addresses back using smtp_generic_maps.
[transport] an alternative is to use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In both approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
True, it may be better to just have a cron job dump out new static maps every 15 minutes or so then to have the MTA query on every delivery especially for 100K accounts.
indeed. and if the table has a "status" field so that the script can download only new or modified entries, then the dump can be made faster. now, a trigger may even be better so that the dump script doesn't need to query the full table, but only a small table of new/modified entries generated by the trigger.
Matt Shields wrote:
Data changes too frequently to generate the file every x number of minutes across all smtp servers.
The mysql db isn't a single server. It's a master (read/write) with multiple replicas for read access. Those replicas are load balanced with LVS (heartbeat/ldirectord/ipvsadm). The postfix(smtp) incoming and outgoing servers are also load balanced with LVS. So database read speed is not an issue. Believe me, we know how to build large high traffic sites, the only problem we're having is the exact syntax on using transport_maps or virtual_transport with multiple lmtp transports, and I think I got that figured out with the transport_maps. Will post more later.
the syntax is simple, but depends on the structure of your tables.
transport_maps = ... proxy:mysql:/etc/postfix/maps/mysql/transport ...
# cat /etc/postfix/maps/mysql/transport
hosts = 192.0.2.33 ...
user = youruser
password = yourpassword
dbname = yourdbname
query = select concat('lmtp:', host) from yourtable where mailbox = '%s'
The above assumes a simple {`mailbox`, `host`} structure. You'll need to adjust the sql query to your table structure.
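you can test the query from the command line with something like

  postmap -q foo@example.com mysql:/etc/postfix/maps/mysql/transport

(drop the proxy: prefix when testing by hand); it should print the lmtp:... entry for that mailbox.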
Matt Shields wrote:
Data changes too frequently to generate the file every x number of minutes across all smtp servers.
You have to support instantly deliverable mailboxes for new accounts?
The mysql db isn't a single server. It's a master (read/write) with multiple replicas for read access. Those replicas are load balanced with LVS (heartbeat/ldirectord/ipvsadm). The postfix(smtp) incoming and outgoing servers are also load balanced with LVS. So database read speed is not an issue. Believe me, we know how to build large high traffic sites, the only problem we're having is the exact syntax on using transport_maps or virtual_transport with multiple lmtp transports, and I think I got that figured out with the transport_maps. Will post more later.
I assume that you are aware that transport_maps is called multiple times.
Recipient_maps in rdbms tables generate at least two lookups (one for smtpd, one for cleanup) but when you add transport_maps, that will at least explode to one per subdomain of the sender address (you can mitigate a lot of that with the domain setting in the map configuration file) as trivial-rewrite tries to build its triples for addresses.
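For example, if you only host a handful of domains (example.com and example.net are stand-ins here), adding

  domain = example.com, example.net

to the mysql map configuration file tells Postfix to skip lookups for keys outside those domains, which cuts out most of those extra queries.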
On 10/23/07, Christopher Chan christopher@ias.com.hk wrote:
Matt Shields wrote:
Data changes too frequently to generate the file every x number of minutes across all smtp servers.
You have to support instantly deliverable mailboxes for new accounts?
Yes, don't ask me why, it's a business thing.
The mysql db isn't a single server. It's a master (read/write) with multiple replicas for read access. Those replicas are load balanced with LVS (heartbeat/ldirectord/ipvsadm). The postfix(smtp) incoming and outgoing servers are also load balanced with LVS. So database read speed is not an issue. Believe me, we know how to build large high traffic sites, the only problem we're having is the exact syntax on using transport_maps or virtual_transport with multiple lmtp transports, and I think I got that figured out with the transport_maps. Will post more later.
I assume that you are aware that transport_maps is called multiple times.
Recipient_maps in rdbms tables generate at least two lookups (one for smtpd, one for cleanup) but when you add transport_maps, that will at least explode to one per subdomain of the sender address (you can mitigate a lot of that with the domain setting in the map configuration file) as trivial-rewrite tries to build its triples for addresses.
Yes, we're aware; that's why we have mysql set up with multiple incoming and outgoing smtp servers that read from a large cluster of replicated mysql servers (read-only).
Not saying we won't look at creating a cron job to dump maps to a local file (we might do that in the future), but for right now we have enough horsepower to deal with what we have.
-matt
On 10/23/07, mouss mlist.only@free.fr wrote:
There are primarily two ways:
[virtual aliase] you can use virtual_alias_maps to redirect foo@example.com to foo@hostN.example.com, provided the final server accepts such addresses.
If the final server doesn't accept these, and you use smtp to relay to, then you can write the addresses back, using smtp_generic_maps.
[transport] an laternative is to use use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In bothe approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
Anyone have a working example that they could share? It would be greatly appreciated.
Forwards aren't acceptable. There is a way to do it with the transport function and lmtp on an account-by-account basis. I'm looking for real-world configs from someone that has this working.
-matt
On Oct 23, 2007, at 12:28 PM, Matt Shields wrote:
Forward's aren't acceptable. There is a way to do it with the transport function and lmtp on a account by account basis. I'm looking for real world configs from someone that has this working.
Not condoning, but providing some links: http://middleware.internet2.edu/dir/docs/ldap-recipe.htm#E-MailRouting http://www.postfix.org/LDAP_README.html#example_virtual
The transport function will tell Postfix how to deliver to a particular server, but I'm not sure you are going to get the kind of efficiency you probably want by treating the user-account-to-server mapping as part of the transport function, though suggestions have been made that fit that way of thinking.
Regardless of what method you use to generate the maps, be it mysql, ldap or flat file, you will want the maps available to each edge host on the boxes themselves, so either store copies of the flat files, a local copy of the mysql database or a local directory (none of them being the masters, more functioning like caching-only name servers). I'm partial to flat files for smaller maps and LDAP for larger ones, but there are arguments all the way around, some of which depend on local admin familiarity with whichever tech.
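For flat files the distribution can be as simple as something like (hostnames invented):

  rsync -a /etc/postfix/maps/ edge01:/etc/postfix/maps/
  ssh edge01 'postmap /etc/postfix/maps/transport'

pushed from whichever box builds the maps.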
Forward's aren't acceptable. There is a way to do it with the transport function and lmtp on a account by account basis. I'm looking for real world configs from someone that has this working.
Depending on how you define forwards, it is not going to be possible for you to not have forwards, unless you have a large number of domains pointing directly at your delivery point servers and have only a certain number of domains per individual server.
--Chris
Matt Shields wrote:
On 10/23/07, mouss mlist.only@free.fr wrote:
There are primarily two ways:
[virtual aliase] you can use virtual_alias_maps to redirect foo@example.com to foo@hostN.example.com, provided the final server accepts such addresses.
If the final server doesn't accept these, and you use smtp to relay to, then you can write the addresses back, using smtp_generic_maps.
[transport] an laternative is to use use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In bothe approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
Anyone have a working example that they could share? It would be greatly appreciated.
Forward's aren't acceptable.
That's why I said to use smtp_generic_maps. This way, the "forward" is internal. This is more efficient than transport_maps.
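as a sketch (the table layout is the same invented one as before): on the transport that relays to the backends, or just globally in main.cf, you could set

  smtp_generic_maps = mysql:/etc/postfix/mysql-generic.cf

with a query along the lines of

  query = SELECT CONCAT(user, '@', domain) FROM User
      WHERE '%u' = user AND '%d' = CONCAT(host, '.example.com')

so foo@hostN.example.com is mapped back to foo@example.com and the internal hostN addresses never show up in the stored mail.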
There is a way to do it with the transport function and lmtp on a account by account basis. I'm looking for real world configs from someone that has this working.
sorry, I don't. anyway, I see you got about the same answer from Wietse and Viktor on the postfix list.
Matt Shields wrote:
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
I have no idea how that fits into your already existing infrastructure, but the Cyrus IMAPD Aggregator (also known as Cyrus IMAPD Murder) looks like something which should be evaluated - you could probably even drop the mysql database, as it really doesn't matter which of the lmtp/imapd proxies you connect to.
More information:
http://cyrusimap.web.cmu.edu/ag.html
Cheers,
Ralph
On 10/24/07, Ralph Angenendt ra+centos@br-online.de wrote:
Matt Shields wrote:
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
Having no idea how that fits into your already existing infrastructure, but the Cyrus IMAPD Aggregator (also known as Cyrus IMAPD Murder) looks like something which should be evaluated - you probably can even drop the mysql database, as it really doesn't matter to which of the lmtp/imapd proxies you connect to.
From what I understand about Cyrus Murder, it is for replicating your user data across multiple servers, which is good if you want to load balance multiple IMAP servers and you don't have a shared storage backend.
As mentioned we have a web frontend that checks mysql when the user logs in to see which imap server the account resides on.
Everyone, I have figured it out. I do plan on posting after I finish documenting the steps (for those interested).
-matt
Matt Shields wrote:
I'm trying to set up a large scale email system that supports 100,000+ IMAP accounts. We have an existing frontend web interface that does a lookup on a mysql db to figure out which IMAP server to connect to for each user. For the email infrastructure we have decided on Postfix and Cyrus. We have configured both to use mysql to get the virtual user information.
Because of the way that the infrastructure is (biz reasons) we are not doing shared storage, we have numerous IMAP servers that we distribute accounts across. As we add more users, we image up a new IMAP server. For our business's scaling purposes this was the best plan.
What I am having a problem is how do I get postfix to transfer the email to the particular IMAP server that the user account is on. I know that I need to use lmtp and transport, but all the examples I have seen show forwarding all email to 1 IMAP server. I would like Postfix to do a lookup for each mailbox and determine which IMAP server to deliver it to.
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
There are at least 2 free ldap servers. Or if you are stuck with mysql you can probably add your own field for delivery host.
Does anyone have enough faith in a free NFS server to use it in this scenario these days? How about opensolaris on top of zfs?
Les Mikesell wrote:
Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
There are at least 2 free ldap servers. Or if you are stuck with mysql you can probably add your own field for delivery host.
The service provider I used to work for tried openldap in 98. They got burned big time. Maybe it is up to the task today. What kind of hardware, though, would you use for one that the OP indicates will get a lot of writes? Everything I have read says LDAP is not for high write problems.
Does anyone have enough faith in a free NFS server to use it in this scenaro these days? How about opensolaris on top of zfs?
I would say. No comment on opensolaris in this scenario but I am happy with zfs as an offsite online backup solution.
Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
There are at least 2 free ldap servers. Or if you are stuck with mysql you can probably add your own field for delivery host.
The service provider I used to work for tried openldap in 98. They got burned big time. Maybe it is up to the task today. What kind of hardware, though, would you use for one that the OP indicates will get a lot of writes? Everything I have read says LDAP is not for high write problems.
1998 was a long time ago. Red Hat (fedora) directory server has claimed good performance for several years now. http://directory.fedoraproject.org/
But the openldap guys think they are better - see page 33 of the pdf linked from this page: http://www.mail-archive.com/ldap@umich.edu/msg01151.html (22000 queries/sec, 4800 updates/sec on a terabyte database with 150 million entries - but I think the test box had 480Gigs of RAM...)
Does anyone have enough faith in a free NFS server to use it in this scenaro these days? How about opensolaris on top of zfs?
I would say. No comment on opensolaris in this scenario but I am happy with zfs as an offsite online backup solution.
Are you using the incremental send/receive operation for this?
Les Mikesell wrote:
Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
There are at least 2 free ldap servers. Or if you are stuck with mysql you can probably add your own field for delivery host.
The service provider I used to work for tried openldap in 98. They got burned big time. Maybe it is up to the task today. What kind of hardware, though, would you use for one that the OP indicates will get a lot of writes? Everything I have read says LDAP is not for high write problems.
1998 was a long time ago. Red Hat (fedora) directory server has claimed good performce for several years now. http://directory.fedoraproject.org/
Yeah, well, I guess the Fedora Directory server is unlikely to drop its entire datastore and will actually keep running but hey, are you going to migrate back to ldap if you have a system that is distributed across different mysql boxes running on cheap boxes and does its job?
But the openldap guys think they are better - see page 33 of the pdf linked from this page: http://www.mail-archive.com/ldap@umich.edu/msg01151.html (22000 queries/sec, 4800 updates/sec on a terabyte database with 150 million entries - but I think the test box had 480Gigs of RAM...)
There you go. If you have the hardware, you can do openldap. 480Gigs? Did you add an extra zero?
Does anyone have enough faith in a free NFS server to use it in this scenaro these days? How about opensolaris on top of zfs?
I would say. No comment on opensolaris in this scenario but I am happy with zfs as an offsite online backup solution.
Are you using the incremental send/receive operation for this?
Huh? This is just rsync for the vpopmail maildirs, user home directories and Pervasive database files, plus scp for an Exchange backup file, and then snapshotting on the zfs volume for the vpopmail and user home directories. Nothing heavy. What is this incremental send/receive operation that you are talking about?
On Thu, 2007-10-25 at 09:58 +0800, Christopher Chan wrote:
Les Mikesell wrote:
Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
There are at least 2 free ldap servers. Or if you are stuck with mysql you can probably add your own field for delivery host.
The service provider I used to work for tried openldap in 98. They got burned big time. Maybe it is up to the task today. What kind of hardware, though, would you use for one that the OP indicates will get a lot of writes? Everything I have read says LDAP is not for high write problems.
1998 was a long time ago. Red Hat (fedora) directory server has claimed good performce for several years now. http://directory.fedoraproject.org/
Yeah, well, I guess the Fedora Directory server is unlikely to drop its entire datastore and will actually keep running but hey, are you going to migrate back to ldap if you have a system that is distributed across different mysql boxes running on cheap boxes and does its job?
---- what I can't figure out is why you are asking questions when you have already decided answers...in part based on experiences from 10 years ago.
Craig
Craig White wrote:
On Thu, 2007-10-25 at 09:58 +0800, Christopher Chan wrote:
Les Mikesell wrote:
Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
There are at least 2 free ldap servers. Or if you are stuck with mysql you can probably add your own field for delivery host.
The service provider I used to work for tried openldap in 98. They got burned big time. Maybe it is up to the task today. What kind of hardware, though, would you use for one that the OP indicates will get a lot of writes? Everything I have read says LDAP is not for high write problems.
1998 was a long time ago. Red Hat (fedora) directory server has claimed good performce for several years now. http://directory.fedoraproject.org/
Yeah, well, I guess the Fedora Directory server is unlikely to drop its entire datastore and will actually keep running but hey, are you going to migrate back to ldap if you have a system that is distributed across different mysql boxes running on cheap boxes and does its job?
what I can't figure out is why you are asking questions when you have already decided answers...in part based on experiences from 10 years ago.
Well, I do not work for that service provider anymore...I was just putting forth the question they would probably ask...
In any case, the point about money for hardware stands, I believe, unless Fedora Directory/OpenLDAP has really good performance in a heavy read/write environment versus mysql.
On Thu, 2007-10-25 at 10:30 +0800, Christopher Chan wrote:
Craig White wrote:
On Thu, 2007-10-25 at 09:58 +0800, Christopher Chan wrote:
Les Mikesell wrote:
Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
There are at least 2 free ldap servers. Or if you are stuck with mysql you can probably add your own field for delivery host.
The service provider I used to work for tried openldap in 98. They got burned big time. Maybe it is up to the task today. What kind of hardware, though, would you use for one that the OP indicates will get a lot of writes? Everything I have read says LDAP is not for high write problems.
1998 was a long time ago. Red Hat (fedora) directory server has claimed good performce for several years now. http://directory.fedoraproject.org/
Yeah, well, I guess the Fedora Directory server is unlikely to drop its entire datastore and will actually keep running but hey, are you going to migrate back to ldap if you have a system that is distributed across different mysql boxes running on cheap boxes and does its job?
what I can't figure out is why you are asking questions when you have already decided answers...in part based on experiences from 10 years ago.
Well, I do not work for that service provider anymore...I was just putting forth the question they would probably ask...
In any case, the money for hardware stands I believe unless Fedora Directory/OpenLDAP has really good performance in a heavy read/write environment versus mysql.
---- Heck, I see lots of circles where they wouldn't trust mysql for an enterprise application so it seems clear that you are not talking about stability or performance but rather familiarity and the amount of trust you have in what you know.
I would expect openldap to blow the doors off a mysql db, but what do I know? I deal in circles with < 100 user accounts (small businesses).
Craig
Heck, I see lots of circles where they wouldn't trust mysql for an enterprise application so it seems clear that you are not talking about stability or performance but rather familiarity and the amount of trust you have in what you know.
I would expect openldap to blow the doors off a mysql db but what do I know? I deal in circles < 100 user accounts (small businesses).
Wow, it's amazing how off topic things get and how many opinions you get on a mailing list, when all you wanted to know was how specifically to do this or that. That's why I stated what my environment was.
But since numerous people have stated that mysql is inadequate for what we want to do, or for any task in general: we currently use mysql in a replicated environment with LVS to balance the connections for our main websites, which are all dynamic. Last time I checked we were sustaining thousands of visitors per second 24 hours a day, which equaled about 3-4 thousand queries per second.
So, if it can handle that load and Google trusts it in their infrastructure, then I'm not gonna replace it. It does what I need, it's reliable, it's fast and it has proven that it scales well.
I think the main problem when people say you shouldn't use this product or that product because it's not good enough is that they haven't set it up properly. They haven't taken the time to tune the server, the daemon, and the application. Let's face it, anyone can write a query to a database (like "select * from table"), and if you put enough load behind it your performance is gonna suck no matter what your app or database is. But if you take time to tune your code and your database and design it so it can scale, you can efficiently use applications like mysql.
Anyway, back to my original request. You can use the "transport_maps" feature to dynamically look up lmtp transports on a per-account basis. I have figured it out, and for those that are curious I will post when I've finished documenting everything.
-matt
Matt Shields wrote:
Anyway, back to my original request. You can use the "transport_maps" feature to dynamically lookup lmtp transports on a per account basis. I have figured it out, and for those that are curious I will post when I've finished documenting everything.
I thought the lmtp transport was local by definition. How does that work when you need delivery to happen on a different host?
Les Mikesell wrote:
Matt Shields wrote:
Anyway, back to my original request. You can use the "transport_maps" feature to dynamically lookup lmtp transports on a per account basis. I have figured it out, and for those that are curious I will post when I've finished documenting everything.
I thought the ltmp transport was local by definition. How does that work when you need delivery to happen on a different host?
lmtp was designed to save having to queue the email on the delivery box and then deliver it via a local lda.
lmtp is kind of like using smtp to talk directly to the lda.
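In Postfix the lmtp client will happily speak LMTP over TCP (or a unix socket), so a per-user transport entry can point straight at a remote Cyrus backend, e.g. something like

  foo@example.com    lmtp:[imap03.example.com]:24

where the hostname is made up and port 24 is just an example - use whatever your Cyrus lmtpd service actually listens on.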
On Wed, 24 Oct 2007 23:47:00 -0400 "Matt Shields" mattboston@gmail.com wrote:
But, since numerous people have stated how mysql is inadequate to do what we want to do or in general for any task. We currently use mysql in a replicated environment with LVS to balance the connections for our main websites that is all dynamic. Last time I checked we were sustaining thousands of visitors per second 24 hours a day, which equaled about 3-4 thousand queries per second.
Coming late into this thread ... but some firsthand experience from a 300k pfix/cyrus webmail system: everything in mysql (webmail stuff, all pfix lookups, some other things) and the mysql machine (nothing special, dual xeon, two mirrored disks) was picking its nose most of the time with load less than 0.3.
I think the main problem when people say you shouldn't use this product or that product because it's not good enough is they haven't set it up properly. They haven't taken the time to tune the server, the daemon, and the application.
Fully agree.
Anyway, back to my original request. You can use the "transport_maps" feature to dynamically lookup lmtp transports on a per account basis. I have figured it out, and for those that are curious I will post when I've finished documenting everything.
Sure you can. Isn't it obvious? :)
Matt Shields wrote:
Heck, I see lots of circles where they wouldn't trust mysql for an enterprise application so it seems clear that you are not talking about stability or performance but rather familiarity and the amount of trust you have in what you know.
I would expect openldap to blow the doors off a mysql db but what do I know? I deal in circles < 100 user accounts (small businesses).
Wow it's amazing how off topic and how many opinions you get on a mailing list, when all you wanted to know was how do I specially do this or that. That's why I stated what my environment was.
But, since numerous people have stated how mysql is inadequate to do what we want to do or in general for any task. We currently use mysql in a replicated environment with LVS to balance the connections for our main websites that is all dynamic. Last time I checked we were sustaining thousands of visitors per second 24 hours a day, which equaled about 3-4 thousand queries per second.
So, if it can handle that load and Google trusts it in their infrastructure, then I'm not gonna replace it. It does what I need, it's reliable, it's fast and it has proven that it scales well.
I think the main problem when people say you shouldn't use this product or that product because it's not good enough is they haven't set it up properly. They haven't taken the time to tune the server, the daemon, and the application. Let's face it anyone can write a query to a database (like "select * from table") and if you put enough load behind it your performance is gonna suck no matter what your app or database is. But if you take time to tune your code and your database and design it so it can scale, you can efficiently use applications like mysql.
Anyway, back to my original request. You can use the "transport_maps" feature to dynamically lookup lmtp transports on a per account basis. I have figured it out, and for those that are curious I will post when I've finished documenting everything.
Requoting my first reply in this thread (some typos corrected):
=========== [transport] an alternative is to use (per-user) transport_maps. something like
foo@example.com relay:[hostN.example.com]
In both approaches, the mappings can be generated using sql statements (mostly CONCAT). something like ... query = SELECT concat('relay:[', host, '.example.com]') FROM User where '%u' = user and '%d' = domain
you get the idea I hope.
===========
Heck, I see lots of circles where they wouldn't trust mysql for an enterprise application so it seems clear that you are not talking about stability or performance but rather familiarity and the amount of trust you have in what you know.
Let's see, mysql crashes (elcheapo hardware, happens once in a while) but tables containing hundreds of thousands of rows survive intact on reboot. Could you do that with postgresql? Nah. Did I mention you can just copy myisam files to another box, even one running another OS, and use them without trouble so long as both boxes are on the same cpu platform? Add solid replication and hey, it is hard to beat for the price (free). mysql is stable within the limits it can handle. It can run for months without trouble. A dual PIII box with 1GB of RAM can handle the peak load of 6 postfix boxes that are configured to handle 800 simultaneous connections (okay, most of those connections rarely made it to the check user stage so let's put ten percent as successful: 800*6*3 [bare minimum in the modified postfix] * 0.1 = 1440) and not break a sweat, chewing only 10-20% of available cpu resources. Mind you, this is only possible due to postfix proxy connection pooling; otherwise mysql would bring the box to its knees if postfix had to open and close tcp connections for each set of queries.
I guess I should try to make a test against openldap/fedoraDS and see how they fare.
I would expect openldap to blow the doors off a mysql db but what do I know? I deal in circles < 100 user accounts (small businesses).
Yeah, seeing mysql in action in a service provider that now handles over 40 million mailboxes (over 30 million when I joined and worked there) sure puts a few points in for it in simple table environments.
Christopher Chan wrote:
Heck, I see lots of circles where they wouldn't trust mysql for an enterprise application so it seems clear that you are not talking about stability or performance but rather familiarity and the amount of trust you have in what you know.
Let's see, mysql crashes (elcheapo hardware, happens once in a while) but tables containing hundreds of thousands of rows survive intact on reboot.
Mysql is OK if you don't really need a relational database - particularly if you can put everything in a single table at least for the frequent queries.
Could you do that with postgresql? Nah.
I don't recall ever having a problem with postgresql.
Did I mention you can just copy myisam files to another box and even if it has another OS so long as they are on the same cpu platform and use it without trouble?
Don't see why that would be a problem for postgresql either, as long as the database wasn't running when you copied the file and the postgresql revs were similar.
I guess I should try to make a test against openldap/fedoraDS and see how they fare.
Even though I posted those performance benchmarks, I'd want to do some serious testing before trusting it. I've had my share of problems with things based on Berkeley DB too, but perhaps those problems are fixed now.
Les Mikesell wrote:
Christopher Chan wrote:
Heck, I see lots of circles where they wouldn't trust mysql for an enterprise application so it seems clear that you are not talking about stability or performance but rather familiarity and the amount of trust you have in what you know.
Let's see, mysql crashes (elcheapo hardware, happens once in a while) but tables containing hundreds of thousands of rows survive intact on reboot.
Mysql is OK if you don't really need a relational database - particularly if you can put everything in a single table at least for the frequent queries.
Which is why I put 'simple table environment' in my comment.
Could you do that with postgresql? Nah.
I don't recall ever having a problem with postgresql.
I guess the latest versions are more crash resilient. But still no builtin replication.
Did I mention you can just copy myisam files to another box and even if it has another OS so long as they are on the same cpu platform and use it without trouble?
Don't see why that would be a problem for postgresql either as long as the database wasn't running when you copied the file and the posgresql revs were similar.
For postgresql, you have to copy everything. For mysql, you can do individual tables if you are using myisam tables.
I guess I should try to make a test against openldap/fedoraDS and see how they fare.
Even though I posted those performance benchmarks, I'd want to do some serious testing before trusting it. I've had my share of problems with things based on Berkeley DB too, but perhaps those problems are fixed now.
If I do it, it would be just for my interest only as I no longer work for that service provider.
Christopher Chan wrote:
I don't recall ever having a problem with postgresql.
I guess the latest versions are more crash resilient. But still no builtin replication.
This has gotten far afield of CentOS, but recent vintages of Postgresql DO support replication. :)
http://www.postgresql.org/about/
Best,
Chris Mauritz wrote:
Christopher Chan wrote:
I don't recall ever having a problem with postgresql.
I guess the latest versions are more crash resilient. But still no builtin replication.
This has gotten far afield of CentOS, but recent vintages of Postgresql DO support replication. :)
Yes, but not built in... the docs on 8.3beta talk about using third-party solutions for replication.
Christopher Chan wrote:
The service provider I used to work for tried openldap in 98. They got burned big time. Maybe it is up to the task today. What kind of hardware, though, would you use for one that the OP indicates will get a lot of writes? Everything I have read says LDAP is not for high write problems.
1998 was a long time ago. Red Hat (fedora) directory server has claimed good performce for several years now. http://directory.fedoraproject.org/
Yeah, well, I guess the Fedora Directory server is unlikely to drop its entire datastore and will actually keep running but hey, are you going to migrate back to ldap if you have a system that is distributed across different mysql boxes running on cheap boxes and does its job?
Yes, I've had enough trouble with mysql that I'd look for any alternative, but to be fair that was a few years back too.
But the openldap guys think they are better - see page 33 of the pdf linked from this page: http://www.mail-archive.com/ldap@umich.edu/msg01151.html (22000 queries/sec, 4800 updates/sec on a terabyte database with 150 million entries - but I think the test box had 480Gigs of RAM...)
There you go. If you have the hardware, you can do openldap. 480Gigs? Did you add an extra zero?
I copied it from this email post. http://www.redhat.com/archives/fedora-directory-users/2007-July/msg00113.htm...
Does anyone have enough faith in a free NFS server to use it in this scenaro these days? How about opensolaris on top of zfs?
I would say. No comment on opensolaris in this scenario but I am happy with zfs as an offsite online backup solution.
Are you using the incremental send/receive operation for this?
Huh? This is just rsync for the vpopmail maildir, user home directories, pervasive database files and scp for an Exchange backup file and then snapshotting on the zfs volume for the vpopmail and user home directories. Nothing heavy. What is this incremental send/receive operation that you are talking about?
zfs has the ability to make filesystem snapshots, then back them up with a send/receive operation. See bottom of this page http://docs.sun.com/app/docs/doc/819-5461/ftyxi?a=view. I haven't used it myself but it sounds handy.
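The basic shape, going by that doc (untested here, and the pool/filesystem names are made up):

  zfs snapshot backup/mail@2007-10-25
  zfs send -i backup/mail@2007-10-24 backup/mail@2007-10-25 | ssh offsite zfs receive backup/mail

i.e. an incremental stream between two snapshots piped to the receiving box.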
Yeah, well, I guess the Fedora Directory server is unlikely to drop its entire datastore and will actually keep running but hey, are you going to migrate back to ldap if you have a system that is distributed across different mysql boxes running on cheap boxes and does its job?
Yes, I've had enough trouble with mysql that I'd look for any alternative, but to be far that was a few years back too.
A few years back was when I worked with mysql in a multi-million-mailbox environment with dual PIII mysql servers. Once I stuffed the customized sendmail in the bin and replaced it with postfix and its connection caching capabilities, those mysql boxes became the most stable part of the system. See my post to Craig. Those were versions 3.xx and 4.0.x.
But the openldap guys think they are better - see page 33 of the pdf linked from this page: http://www.mail-archive.com/ldap@umich.edu/msg01151.html (22000 queries/sec, 4800 updates/sec on a terabyte database with 150 million entries - but I think the test box had 480Gigs of RAM...)
There you go. If you have the hardware, you can do openldap. 480Gigs? Did you add an extra zero?
I copied it from this email post. http://www.redhat.com/archives/fedora-directory-users/2007-July/msg00113.htm...
Well, that is some serious hardware. My experience was with dual PIII and later P4 based Xeons...
Does anyone have enough faith in a free NFS server to use it in this scenaro these days? How about opensolaris on top of zfs?
I would say. No comment on opensolaris in this scenario but I am happy with zfs as an offsite online backup solution.
Are you using the incremental send/receive operation for this?
Huh? This is just rsync for the vpopmail maildir, user home directories, pervasive database files and scp for an Exchange backup file and then snapshotting on the zfs volume for the vpopmail and user home directories. Nothing heavy. What is this incremental send/receive operation that you are talking about?
zfs has the ability to make filesystem snapshots, then back them up with a send/receive operation. See bottom of this page http://docs.sun.com/app/docs/doc/819-5461/ftyxi?a=view. I haven't used it myself but it sounds handy.
Single OpenSolaris box. I don't have an exciting environment anymore :-(
On Wed, 2007-10-24 at 21:21 +0800, Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
---- last I checked, openldap, postfix and cyrus-imapd were free. What is the money reference?
cyrus-imapd doesn't use maildir but rather its own methodology, which is similar to maildir but keeps all the mail in its own partition instead of users' folders. It doesn't use the system for quota management but has quota management built in. It seems much more sane and permits 'virtual users', which is/can be a virtue of ldap-based accounts.
Craig
On Oct 24, 2007, at 1:38 PM, Craig White wrote:
On Wed, 2007-10-24 at 21:21 +0800, Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
last I checked, openldap, postfix and cyrus-imapd were free. What is the money reference?
cyrus-imapd doesn't use maildir but rather it's own methodology which is similar to maildir but keeps all the mail in it's own partition instead of users folders. It doesn't use system for quota management but has quota management built in. It seems much more sane and permits 'virtual users' which is/can be a virtue of ldap based accounts.
I'm guessing that "money" referred to the netapp filer Tony S
Craig White wrote:
On Wed, 2007-10-24 at 21:21 +0800, Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
last I checked, openldap, postfix and cyrus-imapd were free. What is the money reference?
The hardware, the hardware.
On Thu, 2007-10-25 at 02:26 +0800, Christopher Chan wrote:
Craig White wrote:
On Wed, 2007-10-24 at 21:21 +0800, Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
last I checked, openldap, postfix and cyrus-imapd were free. What is the money reference?
The hardware, the hardware.
---- probably takes a bit of hardware to support 100,000+ users
Craig
On Wed, Oct 24, 2007 at 10:38:41AM -0700, Craig White wrote:
On Wed, 2007-10-24 at 21:21 +0800, Christopher Chan wrote:
I thought the usual ways of doing this were to either use a high-performance NFS server (netapp filer...) and maildir format so you can run imap from any client facing server, or to keep the delivery host information in an LDAP attribute that you find when validating the address.
This is the 'I have the money' way of doing this ;-)
last I checked, openldap, postfix and cyrus-imapd were free. What is the money reference?
Last I checked, cyrus-imapd could not provide reliable service when the datastore was on NFS.