#mongodb logs for Monday the 22nd of October, 2012

[00:19:01] <jtomasrl> what are the pro/con of these two approaches? https://gist.github.com/daa38f4475e7ad96774b
[00:48:42] <markfinger> Using Mongo 2.0.6 we've hit an issue where some of our test data is silently failing to be imported prior to the execution of our test suite. As an example of the data which works and fails: https://gist.github.com/983751cd6dab480f7043
[00:51:01] <markfinger> So one JSON file has all but one of its records imported successfully; the one failure happens silently and the record never turns up in the collection. The test data itself was dumped to JSON after a successful import from a BSON dump
[00:51:20] <markfinger> Are there any common causes for something like this to occur?
[02:28:44] <jtomasrl> is it possible to check if the attribute exists before using a select to get it?
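
For reference, the usual way to do this is the $exists query operator; a minimal mongo shell sketch, with hypothetical collection and field names:

    // only return documents where the field is actually present
    db.users.find({ nickname: { $exists: true } })
    // it combines with other criteria in the same query document
    db.users.find({ nickname: { $exists: true, $ne: null } })
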
[02:55:42] <nopcode> hey
[02:56:19] <nopcode> mongodb crashes after many inserts on my vserver
[02:56:46] <nopcode> Mon Oct 22 04:47:15 [clientcursormon] mem (MB) res:706 virt:2139 mapped:976
[02:56:46] <nopcode> Mon Oct 22 04:47:50 [conn1] ERROR: 13601 Couldn't remap private view: errno:12 Cannot allocate memory
[02:56:49] <nopcode> Mon Oct 22 04:47:50 [conn1] aborting
[02:56:52] <nopcode> mem info: vsize: 1627 resident: 800 mapped: 976
[02:57:47] <nopcode> this is version 2.0.7 i believe
[02:59:18] <nopcode> this happens while inserting items
[03:00:52] <fonrithirong> hi, i'm getting a "Object reference not set to an instance of an object." when i do insertbatch (c#) with thread. when i do it without thread, everything is fine. can someone help me? thank you
[03:11:46] <nopcode> oh it seems its an openvz issue...
[04:04:28] <fonrithirong> now i'm getting "MongoDB.Driver.MongoConnectionException: Unable to connect to a member of the replica set matching the read preference Primary " but the replica set isn't running
[04:24:12] <fonrithirong> ok, it disappears when i remove all the connection-string hosts and only give it one host
[04:30:56] <chovy> howdy
[04:31:22] <chovy> looking for some advice. I have a user, which can create a list, and then add items to that list.
[04:33:04] <chovy> I want, however, to expose all items created as an option to other users. They can either use an existing item and add it to their list, or create a new one. The trouble I'm having is how best to display this option to the user, and whether a user should be able to edit an item someone else created.
[07:13:23] <samurai2> hi there, how do I put the result of mongodump --out into a /folder that is on another ubuntu machine? thanks :)
[07:14:13] <samurai2> I mean, have the result of the mongodump command generated in a folder on another ubuntu machine. thanks :)
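
For reference, mongodump can connect to a remote mongod, so the dump can be written directly on the destination machine; a hedged sketch, host names hypothetical:

    # run on the destination machine, pulling from the remote server
    mongodump --host source-host --port 27017 --out /folder
    # or dump locally, then copy the output directory across
    mongodump --out /tmp/dump
    scp -r /tmp/dump user@other-host:/folder
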
[07:35:04] <[AD]Turbo> hola
[07:45:32] <NodeX> msg nickserv id 32o49lkj2lkjlisdfj.mk42984309823jlkoji2y342o143841p9mfmukmwqnr8924p3y1
[07:46:13] <ppetermann> nice password
[07:47:28] <NodeX> oops
[07:47:44] <NodeX> keyboard headbutt password
[08:03:55] <NodeX> lolol
[11:34:38] <dob_> Are you storing the gender as Boolean or String?
[11:36:06] <ron> who's 'you'?
[11:38:20] <NodeX> LOL
[11:39:12] <ron> as for 'us', we save gender as a string.
[11:39:44] <NodeX> I would imagine that's what most sane people do
[11:39:48] <NodeX> (saving space and all that)
[11:39:56] <NodeX> and efficient code
[11:42:22] <tobeplugged> and political correctness
[11:43:31] <NodeX> yeh, hermaphrodites get a raw deal with Boolean
[11:45:40] <dob_> ron: You means * ;-)
[11:47:00] <dob_> okay, thank u. Storing it now as String/enum. Stored it as Boolean in an earlier project, as I did in mysql, but the question is always whether men or women should be zero ;-)
[11:50:09] <ron> it's all about verbosity. I just prefer that if someone looks at the database, they can read and understand the info there. no need to be cryptic.
[11:50:22] <ron> plus we have the option for unspecified.
[12:33:50] <jasiek> does map/reduce actually do anything in parallel?
[12:57:48] <remonvv> get the best of both worlds and store as "M" or "F"
[12:57:55] <remonvv> 7 bytes storage and still human readable
[12:58:10] <NodeX> is a lowercase "m" smaller than an uppercase?
[12:58:17] <remonvv> Yes, it saves 0.3 bits
[12:58:22] <remonvv> So that really adds up
[12:58:26] <NodeX> use "m/f" ;)
[12:58:28] <NodeX> lololol
[12:58:38] <remonvv> But it's rounded up per byte so you know
[12:58:45] <remonvv> So 7.7 bits become 8 bit.
[12:59:22] <remonvv> If you store "m/f" you still don't know the gender dude :s
[13:00:16] <NodeX> that's for hermaphrodites
[13:00:25] <NodeX> or you could store "bob"
[13:00:29] <NodeX> (bit of both) LOLOL
[13:01:32] <NodeX> hmm, 8000 uniques today so far and a 0.44% bounce rate, not too bad
[13:01:48] <remonvv> Context?
[13:02:06] <NodeX> job board
[13:02:39] <NodeX> 8 pages per visit avg too, pretty sweet
[13:05:42] <remonvv> No idea if that's good or bad ;)
[13:10:44] <NodeX> well, the users go to 8 pages on average for each visit
[13:10:55] <NodeX> I would say keeping a user engaged for 8 pages is good
[13:11:18] <ron> remonvv: well, we store M, F or U :)
[13:11:48] <NodeX> I dont store it because I'm not a male showvanist (spelling??) pig Lol
[13:13:28] <ron> well, it's a product requirement
[13:14:01] <NodeX> 5 years ago in the UK they made it illegal to put an age requirement on a job advert
[13:14:20] <NodeX> i.e. "Must be aged 18+" is illegal in the UK as it's ageist
[13:15:32] <remonvv> Isn't insulting people also illegal over there these days? I saw mr. Bean on tv
[13:15:51] <kali> NodeX: chauvisist
[13:15:58] <kali> NodeX: chauvinist
[13:16:05] <NodeX> :D
[13:16:12] <NodeX> (it's my native language too)
[13:16:57] <NodeX> remonvv : I assume so, media is controlled by big business so I suppose they're allowed !!
[13:18:55] <kali> my former employer looked a lot like mr bean
[13:19:04] <kali> travelling in uk with him was... wierd
[13:19:22] <kali> and also weird
[13:19:37] <NodeX> :D
[13:19:42] <NodeX> did you get free stuff?
[13:20:09] <kali> nope. but he was signing autographs
[13:20:13] <NodeX> haha
[13:20:40] <kali> and he did not know who mr bean or mr atkinson was
[13:24:16] <NodeX> I go a feeling Rowan Atkinson is a knight or something
[13:24:24] <NodeX> got *
[13:27:06] <kali> NodeX: can't see it mentioned on his wikipedia page
[13:28:53] <NodeX> I could've sworn he was
[13:29:03] <NodeX> he's been on telly for about 40 years or more
[13:31:29] <Zelest> is 2.0.x "usable" or should I try to move on to 2.2 asap if I plan to develop something new?
[13:32:13] <kali> NodeX: http://www.imdb.com/name/nm0000100/ 100 ! better than any award or official distinction :)
[13:34:47] <Zelest> also, are there any cache features or so for nginx regarding gridfs?
[13:35:42] <Vile> Zelest: I would recommend moving to 2.2
[13:38:35] <kali> Zelest: same here. 2.2.0 is good, and the new aggregation framework is a killer feature
[13:43:40] <Zelest> Ah
[13:43:48] <Zelest> shame it's not ported to FreeBSD yet :/
[13:43:54] <Zelest> or well, it's not in ports yet at least
[13:44:17] <Zelest> is it compiled the same way so one can just bump the version number, or does it require more hacking than that?
[13:50:11] <emilsedgh> im using mongodb in a project, my first experience. somehow, i've got a feeling that im still stuck in the 'relational way of thinking'. for example, i've got a few models, like Artist and Track and Album. Here's an example Album of mine:
[13:50:13] <emilsedgh> http://paste.kde.org/577658/
[13:51:13] <emilsedgh> as you can see, i keep the relational data in a relational way: i keep the ids instead of objects. am i doing it alright?
[13:51:49] <kali> emilsedgh: it depends how your application will access the data
[13:52:49] <kali> emilsedgh: with this schema, a "natural" page with a list of track names and the author name will have to perform about a dozen lookups to the database
[13:52:59] <emilsedgh> well, my application is a node-based single page application. i've written models which are usable on both client and server with a single api. my models are called like this: album.getTracks(function(tracks){})
[13:53:21] <emilsedgh> kali: exactly. i always have to make multiple queries around to get data.
[13:54:03] <emilsedgh> thats why i've also written a populateAssociations(callback) which loads all associations of a model to help me so i wont get into nested callbacks.
[13:54:39] <kali> emilsedgh: yep, but it does not change the model. you still have to look at a dozen documents to get the information for this page
[13:55:15] <emilsedgh> that is right kali. should i instead whole objects?
[13:55:22] <emilsedgh> if yes, how should i keep them updated?
[13:56:02] <emilsedgh> should i instead *store* whole objects?
[13:56:15] <kali> emilsedgh: if you embed more information, you'll make the read easier, and the write more difficult. but in such an app, track names and author names change rarely, so it might be worth paying for expensive updates
[13:57:04] <kali> emilsedgh: most of us use an asynchronous loop for dealing with cascading writes in that kind of case, but you have to code it
[13:57:32] <emilsedgh> i see
[13:57:41] <kali> emilsedgh: and keep in mind that unless you're re-developing the amazon store, it's likely the naive approach will work find for quite a long time
[13:57:47] <kali> * fine
[13:58:30] <emilsedgh> well, i dont expect significant traffic at all. mostly im interested in doing it the right way.
[13:58:51] <kali> this sounds nearly religious to me :)
[14:01:24] <emilsedgh> now, here's another thing. i've got this model named Invoice: http://paste.kde.org/577670/
[14:01:55] <emilsedgh> i need to be able to write an interface which gets a date range and shows a report of sales for albums/tracks in that range.
[14:02:45] <kali> emilsedgh: you need an index on Invoice.date
[14:02:55] <kali> emilsedgh: and the aggregation framework may help you
[14:03:13] <emilsedgh> i've written map/reduce-based code to make that query. but since it's a bit 'dynamic', i have to run that code every time the page is requested.
[14:03:34] <kali> aggregation framework will be way faster than m/r
[14:03:39] <emilsedgh> but i've had the feeling that map/reduce based queries are not something you do regularly or dynamically, but something you do as nightly jobs.
[14:03:47] <kali> emilsedgh: exactly
[14:04:27] <emilsedgh> thanks, reading about aggregation framework now.
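
A hedged sketch of what such a report could look like with the 2.2 aggregation framework, assuming Invoice documents carry date, album_id and amount fields (all names hypothetical):

    // index the field used by the range filter
    db.invoices.ensureIndex({ date: 1 })

    // total sales per album within a date range
    db.invoices.aggregate([
      { $match: { date: { $gte: ISODate("2012-10-01"), $lt: ISODate("2012-11-01") } } },
      { $group: { _id: "$album_id", total: { $sum: "$amount" }, count: { $sum: 1 } } }
    ])
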
[14:09:57] <IAD> so, we are waiting for new cool seminars from 10gen: https://education.10gen.com/
[14:10:06] <Zelest> mhm
[14:10:17] <Zelest> gridfs somehow scares me.
[14:10:28] <Zelest> I've always been told to "never ever use a database as a filesystem" ..
[14:10:37] <Zelest> now, mongo tells me the opposite :P
[14:11:37] <NodeX> Zelest : as with most things Mongo it depends on your use case
[14:11:47] <Zelest> yeah
[14:11:52] <IAD> Zelest: you forgot about sharding
[14:12:01] <Zelest> IAD, uhm?
[14:12:31] <Zelest> NodeX, is gridfs a sane storage solution for storing images for a webshop?
[14:12:42] <kali> Zelest: never believe statements starting with the word "never"
[14:12:52] <Zelest> haha ;)
[14:12:54] <Zelest> nice one :P
[14:13:12] <NodeX> Zelest it depends on the webshop
[14:13:13] <irc2samus> hi guys, the installation documentation for Ubuntu fails when importing the key for the repo
[14:13:24] <NodeX> how often they're requested etc etc
[14:13:34] <irc2samus> here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
[14:14:05] <Zelest> NodeX, ah, my idea was to use some of the built-in cache features in nginx..
[14:14:39] <Zelest> as for the store i plan to build, it will probably get very little traffic.. but it's built as a "webshop solution" where I easily will be able to add more shops.. so in time, i might get tons of stores
[14:15:08] <NodeX> you'll probably be okay then
[14:15:18] <Zelest> the reason i want gridfs for it, is scalability in the sense I can easily add another node (being, a mongodb node, php and nginx) ..
[14:15:19] <NodeX> Nginx will cache MRU and so will Mongo
[14:15:29] <NodeX> also Nginx has a gridfs module iirc
[14:15:32] <Zelest> and availability, seeing the autofailover in mongo is lovely.
[14:15:39] <Zelest> MRU?
[14:15:43] <Zelest> yeah, it does.
[14:16:01] <kali> agreed, a cache on top of a cache is probably a waste of memory
[14:16:07] <irc2samus> guys, do you know how I can set up the 10gen repos on Ubuntu? and where should I report this error in the docs?
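
For the record, the install page linked above boiled down to commands along these lines at the time; when the key import fails, pointing the keyserver at port 80 often gets through firewalls. The exact key ID and repository line should be taken from the docs page itself:

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | \
        sudo tee /etc/apt/sources.list.d/10gen.list
    sudo apt-get update && sudo apt-get install mongodb-10gen
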
[14:16:16] <NodeX> Most Recently Used
[14:16:28] <Zelest> ah
[14:16:44] <Zelest> but, reading is a "problem" if done frequently?
[14:16:59] <Zelest> like, i understand it's not as fast as static files.. but is it a _huge_ overhead?
[14:17:25] <NodeX> not really
[14:17:30] <kali> Zelest: not as fast as files... i'm not sure about that
[14:17:40] <NodeX> it's a little slower because of the TCP
[14:17:43] <kali> Zelest: the only question is "is it in memory"
[14:17:52] <Zelest> true
[14:18:01] <Zelest> and the answer will most likely be no..
[14:18:12] <Zelest> seeing if I get a lot of stores and a lot of products (files)
[14:18:25] <Zelest> depends on how well it pays off ;)
[14:19:21] <Zelest> as for caching, I can most likely define the expire date on the files (HTTP) to like 1-2 hours..
[14:19:27] <IAD> Zelest: http://www.coffeepowered.net/2010/02/17/serving-files-out-of-gridfs/
[14:19:36] <Zelest> so a user never has to re-request the images.
[14:20:02] <Zelest> IAD, aah, nice one, cheers.
[14:23:00] <Zelest> so, with those numbers (and that hardware and all of course) it's 1k requests per second.. per nginx/mongodb node
[14:23:07] <Zelest> that will last for years for me.
[14:23:51] <IAD> Zelest: and all those operations in the benchmark were from cache ...
[14:24:05] <emilsedgh> kali: thanks for helping me out dude :)
[14:39:33] <tomreyn> hi, i've got a replica set consisting of two nodes where both nodes think they are secondary. one of them has correct data, the other has no data. only the one with correct data is running. how can i put them back together? whenever i try to run any commands to split them up or rejoin them i'm told these commands need to be run on the primary, but i currently have no primary.
[14:39:45] <tomreyn> so how can i get the running node back into primary state?
[14:43:47] <kali> tomreyn: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/#replica-set-force-reconfiguration
[14:45:27] <tomreyn> thank you kali
[14:56:40] <tomreyn> this seems to have worked out well. now how can i inspect the progress of the recovery / resynch?
[14:58:58] <kali> tomreyn: look at the secondary logs
[15:01:56] <tomreyn> hmm, I should have thought of this, thanks.
[15:04:01] <tomreyn> it doesn't exactly discuss the overall progress there, though
[15:06:47] <tomreyn> how much faster should i expect an rsync to be compared to a full mongo synch?
[15:07:02] <kali> tomreyn: better than nothing. a "du" on the directory can give you a good idea too
[15:08:03] <tomreyn> sure
[15:08:31] <tomreyn> is it normal for the mongo shell to freeze on a secondary which is being recovered?
[15:09:21] <kali> tomreyn: recovering nodes are refusing connections yes
[15:09:23] <tomreyn> and does an arbiter need to have a full copy of the data?
[15:09:48] <remonvv> no, it doesn't have any data
[15:09:55] <kali> tomreyn: nope, the arbiter only votes
[15:09:59] <tomreyn> great, so little overhead there
[15:10:09] <kali> tomreyn: that's the whole point
[15:10:49] <tomreyn> yes, makes sense if it still has enough information to make a good vote then
[15:11:26] <tomreyn> wow, there are actually people who use thunderbird for IRC :)
[15:31:25] <Jt_-> when i try to connect to a db with a replica set, must i specify the primary node?
[15:33:03] <Jt_-> or is there some autodiscovery to find the primary node?
[15:33:10] <Jt_-> i try to use pymongo
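
For reference: no, drivers (pymongo included) accept a seed list plus the replica set name and discover the primary themselves. A sketch of the connection URI form, hosts and set name hypothetical:

    mongodb://host1:27017,host2:27017/?replicaSet=rs0
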
[15:35:02] <mulinux> hello
[15:36:53] <mulinux> i need a fast and scalable database engine for a dictionary with approximately 1,000,000 entries and a lot of searching in it … is mongodb suitable for it?
[15:37:23] <ron> sure
[15:39:36] <NodeX> but define "searching"
[15:40:00] <mulinux> ron, thanks for the answer, and i assume it's also possible to have search similar to "like" in mysql?
[15:40:12] <kali> MongoDB sucks at full text searching
[15:40:17] <ron> well, that's.. a bit more complicated.
[15:40:42] <ron> Solr/Elastic Search/That other crap is your friend.
[15:41:16] <IAD> full text search =) try sphinx
[15:42:01] <ron> yes, that's the other crap I was talking about :)
[15:42:38] <MongoDBIdiot> but mongodb is webscale
[15:42:48] <MongoDBIdiot> how can it suck on stuff like "like"?
[15:42:49] <ron> oh, Idiot is right.
[15:43:08] <ron> now I wish MacYET was here :p
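
One concrete nuance behind these answers: a case-sensitive regex anchored at the start of the string can use an index, so prefix searches (mysql's "like 'foo%'") are cheap, while unanchored patterns ("like '%foo%'") fall back to a scan. A minimal sketch with hypothetical names:

    db.dictionary.ensureIndex({ word: 1 })
    db.dictionary.find({ word: /^appl/ })   // anchored prefix: can use the index
    db.dictionary.find({ word: /appl/ })    // unanchored: scans
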
[15:59:34] <matubaum> I have a set of documents which contain an array. I want to query all items containing all items of an array. Example: http://pastebin.com/8L9m7k0U
[16:00:58] <matubaum> is there an easy way to do this?
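
The pastebin is gone, but the operator matching this description is $all, which matches documents whose array field contains every listed value; a minimal sketch with hypothetical names:

    db.items.find({ tags: { $all: ["red", "round"] } })
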
[16:06:56] <NodeX> LOL
[16:07:23] <NodeX> SOLR / ES = better than sphinx (more feature rich and less buggy) fwiw
[16:08:34] <NodeX> tbh ES is a little green and buggy too
[16:09:00] <tomreyn> i've got another question. this two-node replica set has gone out of synch when some database operations were run on it. the last message before it reported being out of synch was "cursorid not found local.oplog.rs"
[16:09:12] <tomreyn> how can i prevent it going out of synch again?
[16:09:27] <tomreyn> could this be related to the NUMA issues?
[16:09:59] <kali> tomreyn: check if the replication log is big enough
[16:10:12] <tomreyn> kali: how much is big enough?
[16:10:49] <kali> tomreyn: well, if the lag between the two servers can get bigger than the replication log, your replication is dead
[16:11:12] <kali> tomreyn: and i don't know what numa is :)
[16:11:32] <kali> tomreyn: for the replication log, here, we try to have a few days worth of it
[16:13:02] <kali> tomreyn: depending on your rights rate, it may or may not be possible for you
[16:13:46] <tomreyn> re NUMA, this is a CPU feature. i'm referring to this message which is written to logs with the 10gen builds for Fedora 8/Centos 6:
[16:13:50] <tomreyn> Tue Sep 25 18:45:12 [initandlisten] ** WARNING: You are running on a NUMA machine.
[16:13:50] <tomreyn> Tue Sep 25 18:45:12 [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[16:13:51] <tomreyn> Tue Sep 25 18:45:12 [initandlisten] ** numactl --interleave=all mongod [other options]
[16:14:21] <ron> pastebin.... use pastebin...
[16:14:31] <tomreyn> kali: do you mean s/rights rate/write rate/?
[16:14:42] <tomreyn> ron: for 3 lines?
[16:14:49] <kali> tomreyn: just how many writes there are per second
[16:14:55] <ron> for over 2 lines, yeah.
[16:15:06] <tomreyn> ron: okay, will do in the future, sorry.
[16:15:12] <ron> no worries
[16:15:40] <tomreyn> kali: are you asking me? if you are, you'd need to tell me how to measure it, too ;)
[16:16:28] <kali> tomreyn: nope. it's just that the bigger this write rate, the bigger your log needs to be to store enough time
[16:17:04] <tomreyn> both servers are on the same network, there shouldn't be network outages or latency > 10 ms there.
[16:18:20] <ron> shouldn't or isn't?
[16:18:33] <tomreyn> i guess we need better monitoring
[16:19:18] <tomreyn> i can see that there was communication between both nodes just 30 seconds before this event occurred, though
[16:21:10] <tomreyn> is "replication log" == "oplog" ?
[16:21:26] <kali> tomreyn: yes
[16:21:35] <tomreyn> thanks
[16:23:45] <tomreyn> hmm, a file called local.oplog.rs doesn't exist currently, but that's after i reinitiated the synch.
[16:24:17] <tomreyn> i do have local.{0,3} and local.ns though
[16:33:05] <tomreyn> so what's the filename of the oplog? it's stored on disk, isn't it?
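
For the record, the oplog is the capped collection local.oplog.rs inside the local database, so it lives in the local.* data files rather than in a file of its own. A sketch of how to inspect it from the shell:

    use local
    db.oplog.rs.stats()        // size and usage of the capped collection
    db.printReplicationInfo()  // configured oplog size and the time window it covers
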
[16:47:01] <tomreyn> to test whether automatic failover works, can i just kill the mongod process on the primary, or is this not advisable?
[16:48:28] <NodeX> yer
[16:48:38] <NodeX> it's fine
[16:48:44] <NodeX> (stop not kill)
[16:51:14] <tomreyn> well stop would just simulate an orderly exit
[16:51:45] <NodeX> yes but you're risking data for no reason
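
A gentler way to exercise failover, assuming the goal is just to force an election rather than to simulate a crash, is to ask the primary to step down:

    // run on the current primary; it refuses re-election for 60 seconds
    rs.stepDown(60)
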
[17:08:44] <eka> hi all
[17:51:40] <Zelest> does one need to add indexes and such for gridfs or does mongo handle that magically?
[17:53:57] <tomreyn> ka1i, r0n, N0deX: hey, i forgot to say "thank you" for all the help you gave me earlier, so *thank you very much!*
[18:01:19] <MongoDBIdiot> why should mongodb create indexes for you other than the _id index?
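
In practice the drivers handle this: on first use they create the compound index GridFS needs on the chunks collection. A sketch of how to verify it, assuming the default fs prefix:

    db.fs.chunks.getIndexes()
    // expect something like { files_id: 1, n: 1 } alongside the _id index
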
[18:15:41] <iamchrisf> Any idea when the 10gen jira site is coming back online? Our site is down and we can't contact support.
[18:16:16] <iamchrisf> 10gen really should have a support #
[18:16:45] <heaven_> its down here too
[18:16:56] <heaven_> maybe too much traffic for the courses :P
[18:17:10] <iamchrisf> I think it has to do with the EC2 outage.
[18:17:24] <heaven_> hmm maybe
[18:17:33] <heaven_> though i am still setting up mongo
[18:17:39] <heaven_> on my laptop
[18:17:46] <iamchrisf> Our replica set is dead in the water. Primary is unavailable and no failing over.
[18:18:29] <iamchrisf> we have 4 members across 4 AZs. Yet our primary fails and mongo takes a dump.
[18:18:29] <iamchrisf> I can't even reconfigure the damn rs because the primary is down.
[18:19:08] <heaven_> hmm kind of sucks :/
[18:19:10] <heaven_> :|
[18:19:30] <heaven_> was looking forward to the classes but i guess later now
[18:21:39] <iamchrisf> Well if they can't handle an AZ failure in EC2 you might want to start looking at Riak or cassandra :)
[18:22:47] <WarDekar> hey so i had a question about mongo- I'm using pymongo, but this should apply to mongo as a whole as well: I have a new dictionary object that I want to *update* a mongodb object with, but the .update operator will replace the current object with the new one, and the $append will only work with lists. what I want to accomplish is to pass a new object in, and if the elements are a list, it'll append, and if it's not a list it'll replace
[18:22:51] <WarDekar> is this possible currently?
[18:23:40] <NodeX> upsert ?
[18:23:49] <NodeX> I think an upsert is what you're after
[18:24:22] <NodeX> or perhaps you want $addToSet if your list is an array
[18:26:15] <WarDekar> no upsert doesn't work, upsert will find based on a filter, update if it exists or inserts if it doesn't
[18:26:38] <WarDekar> like say I currently have this object {'a': 1, 'b': [2]}
[18:27:09] <WarDekar> i want to "update" with this object: {'b': [3] ,'c': 4}
[18:27:20] <WarDekar> the new object should be {'a': 1, 'b': [2,3], 'c': 4}
[18:27:44] <NodeX> $addToSet is what you want thenm
[18:27:46] <NodeX> then*
[18:28:15] <WarDekar> okay let me play around with that, what would happen if i would pass in {'a': 4, 'b': 3} ?
[18:28:23] <WarDekar> in my case i would want it to just replace the 'a' key
[18:28:35] <WarDekar> and append the 'b' key, sorry I meant 'b': [3]
[18:28:42] <WarDekar> if i had passed in 'b': 3 i would expect it to replace
[18:29:01] <NodeX> addToSet adds if not there
[18:34:55] <WarDekar> how does $inc work without locking?
[18:35:11] <WarDekar> like say i have 4 processes all calling $inc on the same field
[18:35:18] <WarDekar> am i guaranteed to get proper results?
[18:36:08] <NodeX> no
[18:36:12] <NodeX> mongo is fire and forget
[18:36:19] <WarDekar> yeah that's what i figured
[18:36:34] <NodeX> eventual consistency so you will get 3 counts eventually
[18:36:35] <WarDekar> also addToSet seems to only be for arrays. i'll try it and see, but i'm not sure it has the functionality i'm hoping for
[18:36:49] <NodeX> you said "'b': [3]"
[18:36:54] <NodeX> [3] is an array
[18:37:22] <WarDekar> yeah but i want {'a': 1, 'b': [2]} + {'b': [3] ,'c': 4} = {'a': 1, 'b': [2,3], 'c': 4}
[18:37:23] <IAD> $inc with $set will work correctly
[18:37:49] <NodeX> WarDekar : that's what addToSet does
[18:38:04] <NodeX> it "Adds it if not there"
[18:40:36] <WarDekar> NodeX: But what ends up happening in that case is it makes 'c' an array as well, and also if I pass in {'a': 4, 'b': 3} it throws an error because 'a' isn't a set. I just want it to replace 'a' in that case
[18:40:59] <WarDekar> i understand if these things aren't possible without writing custom code but that's the functionality i'm hoping for so trying to find out if it's possible first
[18:41:12] <NodeX> db.foo.update({criteria},{$addToSet:{b:3}});
[18:42:05] <NodeX> I see where you might have the problem
[18:42:07] <NodeX> "Adds value to the array only if its not in the array already, if field is an existing array, otherwise sets field to the array value if field is not present. If field is present but is not an array, an error condition is raised."
[18:42:30] <NodeX> "not an array", you didn't specify yours was not an array
[18:43:42] <WarDekar> but addToSet also will not update another field, some fields are arrays others aren't, i just want to be able to pass a new dictionary object in, if the key exists, overwrite the value *unless* it's an array in which case append it, if the key doesn't exist, create it
[18:44:35] <NodeX> of course it won't update another field based on a condition of whether the value is in the array or not
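
Pulling the thread together: a single update can mix operators, so scalar fields can be $set while array fields get $addToSet (or $push), but deciding per key which operator applies has to happen in application code before the update is sent. A sketch of WarDekar's example in the shell (someId hypothetical):

    // existing doc: { a: 1, b: [2] }; desired result: { a: 1, b: [2, 3], c: 4 }
    db.coll.update(
      { _id: someId },
      { $addToSet: { b: 3 }, $set: { c: 4 } }
    )
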
[18:48:27] <pringlescan> does anyone know how to use .hint() with Mongoose?
[18:49:17] <iamchrisf> Guys how do you promote a secondary to primary when the primary is unavailable?
[18:49:39] <iamchrisf> This EC2 outage has borked our replica set.
[18:49:55] <NodeX> it will auto promote after time
[19:04:57] <iamchrisf> NodeX: It's been over an hour
[19:06:09] <kali> iamchrisf: //docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
[19:06:12] <kali> +http
[19:07:38] <iamchrisf> kali: file not found. docs were down last I checked.
[19:08:48] <kali> http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
[19:08:51] <kali> oops
[19:09:05] <kali> great.
[19:09:45] <kali> iamchrisf: connect to the remaining node, do "c = rs.conf()", edit c to discard the broken nodes, rs.reconfig(c, { force: true })
[19:14:34] <iamchrisf> kali: exactly what I just did. Thx just got our site back online.
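
Spelled out as shell commands, the procedure kali describes looks roughly like this (which members to keep depends on the configuration):

    // on the surviving member:
    c = rs.conf()
    c.members = [ c.members[0] ]     // keep only the reachable member(s)
    rs.reconfig(c, { force: true })
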
[19:21:15] <rickibalboa> Hi, I have a question. I'm accessing a record using findOne, and I'm also wanting to retrieve the previous and the next records. however, I can't do a find({my_record_query, '$or' prev_record_query}) < not exact syntax etc, I can't do that because I don't know exactly how to find the record previous to the one im looking for.
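
The usual pattern, assuming the documents have an orderable key such as _id or a timestamp: fetch the current record, then query one document on either side of its key. A hedged sketch (someId hypothetical):

    var cur  = db.coll.findOne({ _id: someId })
    var prev = db.coll.find({ _id: { $lt: cur._id } }).sort({ _id: -1 }).limit(1)
    var next = db.coll.find({ _id: { $gt: cur._id } }).sort({ _id: 1 }).limit(1)
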
[19:34:37] <bhosie> so i'm looking for some ideas on upserting millions of records at a time quickly. I'm using a replica set and i've had to slow down the inserts so i don't crash my secondaries. i'm considering using the mongoimport utility but the docs seem to say that's a bad idea. Docs say journaling + replica set means 4 writes per insert. i'm considering turning journaling off. tell me why this might be a bad idea
[20:19:05] <ruffyen> so education.10gen.com is down, is that a known issue?
[20:22:40] <meghan> ruffyen it should be up by end of day
[20:33:17] <ruffyen> alright, just wanted to bring it to someone's attention :) it's been down for a little bit hehe
[20:33:21] <ruffyen> thansk!
[20:33:22] <ruffyen> thanks*
[20:35:23] <neurodrone> Hey guys, does anyone know if for the Java driver we have an option to connect in a non-replset mode?
[20:36:21] <skot> yes, just connect without the list<Server> constructor
[20:36:38] <skot> So create a Mongo("server") instance
[20:50:08] <neurodrone> skot: Thanks for your help!
[20:50:13] <neurodrone> Works like that for me
[21:03:33] <meghan> FYI to ruffyen and others http://blog.10gen.com/post/34116677312/status-update-on-10gen-education
[21:20:10] <addisonj> hrm... anyone know how to get 10gen consultant people to get in touch any quicker? Sent out a request like 6 days ago... :\
[21:21:27] <Oddman> call them?
[21:33:33] <chovy> 10gen was supposed to do a mongo class today, but their site is down.
[21:33:46] <chovy> https://education.10gen.com/dashboard
[22:03:56] <konr_trab> How can I create a negating regex similar to `{"fruit":{"$not":{"$regex":"avocado","$options":"ims"}}}`? This query yields an error message, asking me to use a BSON regex type
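
For the record, that error comes from $not refusing a {$regex: ...} document; it wants an actual BSON regular expression, which the shell writes as a regex literal. (The s option is not a JavaScript regex flag, so it may need a driver-level BSON regex type.) A sketch:

    db.coll.find({ fruit: { $not: /avocado/im } })
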
[22:12:35] <meghan> chovy, please see blog.10gen.com for a status update
[22:12:40] <meghan> and sorry for the inconvenience!
[22:12:54] <meghan> addisonj, shoot me an email (meghan @ 10gen) and i will look into it
[22:13:31] <chovy> meghan: thanks
[22:19:09] <michaeltwofish> Anyone care to help me interpret an MMS graph? http://screencast.com/t/lTcSXgjYJ
[22:40:32] <michaeltwofish> MongoDB appears to be running normally, but graphs suddenly changing dramatically when there was unlikely to be a significant load change makes me ... curious. And I'm no sysadmin/dba :)
[23:12:44] <addisonj> meghan: just sent an email, thanks for the help!
[23:42:14] <BurtyB> Using $size on an array with elements of different types (an array and an integer) doesn't seem to work. is that to be expected?