PMXBOT Log file Viewer


#mongodb logs for Wednesday the 26th of June, 2013

[00:01:08] <b0c1> hi
[07:37:34] <[AD]Turbo> ciao all
[09:59:54] <Milos_> hi guys, a total newbie here, I want to ask a simple question
[09:59:56] <Milos_> !help
[09:59:56] <pmxbot> !8ball (8) !acronym (ac) !anchorman !annoy (a, bother) !bender (bend) !bitchingisuseless (qbiu) !blame !bless !boo !bottom10 (bottom) !calc !chain !cheer (c) !c
[09:59:56] <pmxbot> ompliment (surreal) !config !ctlaltdel (controlaltdelete, controlaltdelete, cad, restart, quit) !curse !dance (d) !deal !define (def) !demotivate (dm) !disembow
[09:59:56] <pmxbot> el (dis, eviscerate) !duck (ducky) !embowel (reembowel) !emergencycompliment (ec, emercomp) !excuse (e) !featurecreep (fc) !fight !flip !fml !gettowork (gtw) !g
[09:59:56] <pmxbot> olfclap (clap) !google (g) !googlecalc (gc) !grail !haiku !hal (2001) !hangover !help (h) !hire !imotivate (im, ironicmotivate) !insult !job (card) !karma (k) !
[09:59:57] <pmxbot> keelhaul (kh) !klingon (klingonism) !log !logo !logs !lunch (lunchpick, lunchpicker) !meaculpa (apology, apologize) !motivate (m, appreciate, thanks, thank) !mu
[09:59:57] <pmxbot> rphy (law) !nailedit (nail, n) !nastygram (nerf, passive, bcc) !norris () !notify !oregontrail (otrail) !panic (pc) !password (pw, passwd) !paste !pick (p, p:,
[09:59:57] <pmxbot> pick:) !progress !quote (q) !r (r) !resolv !roll !rubberstamp (approve) !saysomething !simpsons (simp) !stab (shank, shiv) !storytime (story) !strategy !strike
[09:59:58] <pmxbot> !tgif !therethere (poor, comfort) !ticker (t) !time !tinytear (tt, tear, cry) !top10 (top) !troutslap (slap, ts) !urbandict (urb, ud, urbandictionary, urbandefi
[09:59:58] <pmxbot> ne, urbandef, urbdef) !version (v, e, r) !weather (w) !where (last, seen, lastseen) !wolframalpha (wa) !zinger (zing) !zoidberg (zoid)
[10:00:09] <Milos_> hi guys, a total newbie here, I want to ask a simple question
[10:00:21] <Milos_> I have two collections: posts and comments
[10:00:50] <Milos_> obviously, comments are comments to posts, and each document in comments has an _id of the post it comments to
[10:01:15] <Milos_> so, is there a way to retrieve an array of posts which also contains subarrays of comments?
[10:16:20] <idank> is it possible to forbid access for the readWrite role to the drop() command?
[10:29:47] <Nodex> Milos_ : not in one query
[10:30:25] <Milos_> Nodex, thanks, I kind of hoped it would be possible in MongoDB
[10:30:43] <Nodex> mongo doesn't support joins
[10:31:09] <Milos_> ok, so I first pull all posts, and then run a loop and for each post pull all comments?
[10:31:28] <Nodex> do you really need all comments all times?
[10:31:46] <Milos_> no, not all comments, of course, this is just a hypothetical example
[10:32:01] <Nodex> right so a better idea would be this.....
[10:32:05] <Milos_> I would pull something like 5 comments per post, and then pull more on request
[10:32:20] <Nodex> embed the first X comments (5, 10, 25, w/e) with the post and save the query
[10:33:15] <Milos_> oh, yes, that's an interesting proposal, ingenious, indeed!
[10:33:36] <Nodex> ;)
[10:33:38] <Milos_> what do you mean "save the query"?
[10:33:55] <Nodex> sorry that's a bit confusing
[10:33:56] <Milos_> you mean I don't need to execute the second query if the comments are embedded
[10:34:04] <Nodex> I mean save as in you save having to do the query
[10:34:10] <Nodex> yes exactly as you said ;)
[10:34:13] <Milos_> ok, that's what I thought
[10:34:29] <Milos_> thanks a lot!
[10:34:34] <Nodex> no problemo
[10:37:34] <Milos_> of course, when inserting new comments, I first have to check whether there are enough comments embedded within the post, and then if needed create a new comment document in the comments collection
[10:38:06] <Milos_> but I think it's justified, because reading is a far more frequent operation than commenting
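The pattern Nodex suggests can be sketched as below. This is a hypothetical in-memory sketch: the field names (`embeddedComments`, `commentCount`), the cap of 5, and the overflow collection are all assumptions, not anything stated in the channel.

```javascript
// Sketch of the "embed the first N comments" pattern discussed above.
// All names here are hypothetical; adjust to your own schema.
const EMBED_LIMIT = 5;

// A post carries its first few comments inline, so the common read path
// ("show a post with its top comments") is a single query.
function makePost(title) {
  return { title: title, embeddedComments: [], commentCount: 0 };
}

// On insert, fill the embedded array first; overflow goes to a separate
// comments collection (simulated here with a plain array).
function addComment(post, overflowCollection, comment) {
  post.commentCount += 1;
  if (post.embeddedComments.length < EMBED_LIMIT) {
    post.embeddedComments.push(comment);
  } else {
    overflowCollection.push(comment);
  }
}
```

In the shell the embedded write would be a `$push` guarded by the array length; the in-memory version above just shows the control flow being traded for the saved query.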
[10:56:26] <Nodex> !stab nodex
[10:56:47] <Nodex> !google MongoDB
[10:56:47] <pmxbot> http://www.mongodb.org/ - MongoDB
[10:57:01] <Nodex> !google nodex.co.uk
[10:57:02] <pmxbot> http://www.nodex.co.uk/ - Nodex
[11:35:37] <wieshka> Will the MongoDB instance where rs.initiate() is called act as the data source for the replica set?
[12:13:49] <wieshka> I am trying to migrate from master->slave to a replica set with 3 members. When I reconfigure the master to replset and try to connect from my application, I get: MongoError: Error: not master and slaveOk=false
[12:13:58] <wieshka> what am I doing wrong?
[12:44:00] <harenson> wieshka: you have to execute rs.slaveOk() to read from a secondary machine in the replica set
[12:44:18] <wieshka> harenson, thx figured out already
[12:44:25] <harenson> wieshka: :D
[12:44:42] <harenson> wieshka: it looks like your app was connecting to a secondary, not a primary machine ;)
[12:45:11] <wieshka> besides that, i was not able to initiate replica set because of incorrect /etc/hosts
[12:45:25] <wieshka> but now, replica members already syncing from primary
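For reference, a minimal three-member replica-set configuration of the kind wieshka is setting up might look like this (the set name and host names are placeholders, not taken from the channel):

```javascript
// Hypothetical replica-set config document, as passed to rs.initiate()
// in the mongo shell. Host names are made up.
const rsConfig = {
  _id: 'rs0',
  members: [
    { _id: 0, host: 'db1.example.com:27017' },
    { _id: 1, host: 'db2.example.com:27017' },
    { _id: 2, host: 'db3.example.com:27017' }
  ]
};
// In the shell: run rs.initiate(rsConfig) on ONE member only; the others
// sync from it (which requires resolvable host names, hence the
// /etc/hosts problem above). Reads from a secondary additionally need
// rs.slaveOk() or a secondary-capable read preference in the driver,
// which is what the "not master and slaveOk=false" error is about.
```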
[12:45:34] <harenson> wieshka: great!
[12:45:41] <wieshka> now trying to feed that to my application
[12:45:51] <wieshka> i am using Deployd (node.js)
[12:46:30] <wieshka> trying to configure via this http://docs.deployd.com/docs/server?include=all to work with the replica set, not just the primary
[12:47:06] <harenson> wieshka: I don't use NodeJS
[13:07:00] <_izz_> hey guys, I'm having some problems with the auto_reconnect option in node.js MongoClient driver
[13:07:20] <_izz_> I specify { 'server': { 'auto_reconnect': true } } as the second argument to the connect method
[13:07:45] <_izz_> doesn't seem to do anything in terms of trying to reconnect though, only executes the callback..
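The options shape _izz_ describes, for context. This is a sketch against the legacy (pre-2.x) node.js driver API; the connect call is commented out since it needs a running mongod, and note that `auto_reconnect` only retries the socket, so it is a reasonable guess that operations issued while the link is down still fail unless command buffering is also in play:

```javascript
// Options object for the legacy node.js MongoClient, as described above.
const options = { server: { auto_reconnect: true } };

// const { MongoClient } = require('mongodb');
// MongoClient.connect('mongodb://localhost:27017/test', options, (err, db) => {
//   // auto_reconnect re-establishes the *socket* after a drop; it does
//   // not by itself retry queries that errored while disconnected.
// });
```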
[13:34:27] <wieshka> how come all my replica nodes are now secondary
[13:34:30] <wieshka> and i have 3 nodes
[13:34:41] <wieshka> any help please, as this is live env
[13:38:39] <Nodex> 42
[13:50:00] <daslicht> https://gist.github.com/daslicht/54dfccba01c7eacc2c51
[13:50:22] <daslicht> I just had to setup a clean debian system with mongo
[13:50:32] <daslicht> but when I try to install it I get this error
[13:50:50] <daslicht> any idea what is going on there ?
[13:51:20] <algernon> what repo did you install it from, and on which debian version?
[13:56:28] <daslicht> debian 7
[13:56:47] <algernon> and which repo did you use?
[13:56:52] <daslicht> http://docs.mongodb.org/manual/tutorial/install-mongodb-on-debian/
[13:56:58] <daslicht> i followed the documentation
[13:57:25] <daslicht> yesterday i had successfully installed mongo on a debian 7 server eier
[13:57:30] <algernon> so the sysvinit repo.
[13:58:00] <daslicht> either
[13:58:07] <daslicht> not eier lol
[13:58:56] <algernon> try running /etc/init.d/mongodb start by hand, perhaps with bash -x. also, checking the logs (/var/log/mongodb/mongodb.log) may help figuring out what the problem is
[13:59:35] <daslicht> ok
[13:59:45] <daslicht> whats that error about the locales?
[14:00:04] <daslicht> root@s17063954:/# /etc/init.d/mongodb start
[14:00:04] <daslicht> [FAIL] Starting database: mongodb failed!
[14:00:21] <algernon> that's only a warning and doesn't matter.
[14:00:28] <daslicht> the log is empty
[14:00:44] <algernon> try "bash -x /etc/init.d/mongodb start" then
[14:00:57] <algernon> that'll spew out quite a bit of output, but may be helpful
[14:01:08] <daslicht> failet
[14:01:11] <daslicht> failed
[14:01:16] <daslicht> ok
[14:01:24] <algernon> can you pastebin the output of that?
[14:01:37] <daslicht> https://gist.github.com/daslicht/751573699a95160703ed
[14:01:45] <daslicht> thank you for your time
[14:02:01] <daslicht> maybe i have to mention that this machine was initially installed with debian 6
[14:02:06] <daslicht> and i have upgraded it to 7
[14:02:40] <algernon> hmmm. interesting.
[14:03:09] <daslicht> ?
[14:03:17] <algernon> it starts, but looks like it stops pretty much immediately.
[14:04:09] <algernon> hm... is the database directory owned by mongodb user?
[14:04:20] <daslicht> i havent changed anything
[14:04:46] <daslicht> this?
[14:04:49] <daslicht> this? /usr/bin/mongod
[14:05:01] <algernon> also, you can try running mongod --config /etc/mongodb.conf by hand, and see if that works...
[14:05:11] <daslicht> bash: cd: /usr/bin/mongod: Not a directory
[14:05:14] <algernon> no, /var/lib/mongodb
[14:05:59] <daslicht> https://gist.github.com/daslicht/54a1f1b70d2f42494cd0
[14:06:30] <daslicht> ->> Wed Jun 26 16:05:42.543 [initandlisten] ** WARNING: You are running in OpenVZ. This is known to be broken!!!
[14:06:42] <daslicht> this looks not good
[14:06:57] <algernon> ah.
[14:06:59] <ron> haha
[14:07:15] <daslicht> what does that mean?
[14:07:18] <algernon> that explains a lot
[14:07:19] <daslicht> that my server suxx ?
[14:07:48] <algernon> pretty much, yes.
[14:07:57] <algernon> https://jira.mongodb.org/browse/SERVER-1121
[14:08:09] <daslicht> is that due to the virtualisation?
[14:08:35] <algernon> it's due to OpenVZ. mongodb works fine in other virtualisation environments
[14:09:25] <daslicht> hm i thought they use pest virtuozzo
[14:10:06] <algernon> that's based on openvz, iirc.
[14:10:11] <daslicht> grr
[14:10:22] <daslicht> so this server is useless for mongo ?
[14:10:36] <algernon> most likely, yes
[14:10:40] <daslicht> :(
[14:10:49] <daslicht> suxx
[14:10:50] <daslicht> ok
[14:10:53] <daslicht> thank you
[14:10:59] <daslicht> bbl
[15:16:17] <richthegeek> I have a system which will regularly (like, once every 20-100ms) need to get the first row from a sorted cursor... would it be ok to keep a cursor "open" for as long as possible, or would that not work?
[15:16:40] <richthegeek> eg, each time I need an object just call cursor.nextObject()
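richthegeek's question goes unanswered in the log; rather than holding one cursor open indefinitely (cursors can time out, and a sorted cursor goes stale as writes land), the usual approach is to re-issue a sorted, limited query each time, which is cheap when the sort field is indexed. A sketch of the in-memory equivalent (`priority` is a hypothetical field name):

```javascript
// In the shell this would be: db.items.find().sort({ priority: 1 }).limit(1)
// backed by an index on { priority: 1 }. Simulated here on a plain array.
function firstByPriority(items) {
  if (items.length === 0) return null;
  return items.reduce((min, it) => (it.priority < min.priority ? it : min));
}
```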
[17:10:53] <hems> Hello, when looking at my "database" size, it seems waaaay bigger than the sizes of its child collections summed together.. is there a way to reduce that? seems like it keeps a history of all old records?
[17:13:30] <Nodex> you can compact but it's not advisable on a live database
[17:13:45] <hems> Nodex, but wouldn't compact reduce my read/write speed?
[17:13:52] <hems> Nodex, am still on staging, so its OK
[17:14:19] <Nodex> why on earth would compacting reduce read/write speed?
[17:14:26] <hems> but it's weird, like it says my DB size is 192MB, and if I look inside the database all the collections summed aren't even 10MB
[17:14:37] <hems> its actually less than 5mb
[17:14:53] <Nodex> it's pre-allocated files
[17:15:05] <hems> Nodex, just an nonsense assumption from "compressed files". sorry about that
[17:15:09] <Nodex> http://docs.mongodb.org/manual/faq/storage/
[17:15:30] <hems> Nodex, how do you maintain your db sizes? do you compact often?
[17:15:47] <Nodex> http://docs.mongodb.org/manual/faq/storage/#why-are-the-files-in-my-data-directory-larger-than-the-data-in-my-database
[17:16:00] <Nodex> no, I don't touch them, I have large harddrives and loads of RAM ;)
[17:16:21] <Nodex> if/when it becomes an issue I will compact / scale out
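The 192MB figure is consistent with the preallocated data files Nodex points at: MMAPv1 data files double in size (64MB, 128MB, 256MB, ... capped at 2GB), and the file after the one currently being written is preallocated in advance. So ~5MB of data sits in the first 64MB file with a 128MB file preallocated next: 64 + 128 = 192MB on disk. A rough model (ignoring the namespace file and options like `--smallfiles`):

```javascript
// Rough model of MMAPv1 data-file preallocation: files double from
// 64MB up to a 2048MB cap, and the next file is preallocated ahead of
// the data actually needing it.
function allocatedMB(dataMB) {
  const files = [];
  let size = 64;
  let remaining = dataMB;
  while (remaining > 0) {
    files.push(size);
    remaining -= size;
    size = Math.min(size * 2, 2048);
  }
  files.push(size); // the preallocated next file
  return files.reduce((a, b) => a + b, 0);
}
```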
[17:16:58] <Nodex> I has to much already Zelest
[17:17:13] <Zelest> :D
[17:17:15] <Zelest> sup?
[17:17:37] <hems> Nodex, cool, thanks for the tips. i have seen the compact and repairDatabase commands but yeah, they seem a bit "overkill", i'll let mongo do whatever it wants, and yeah, hope it doesn't become a problem in the future
[17:17:47] <Nodex> not a lot, just got back from Holiday :D
[17:17:48] <Nodex> you?
[17:17:56] <Nodex> hems, nps ;)
[17:18:11] <hems> Nodex, regarding your RAM movies, are they nice and beautiful lesbian midgets or just the average gay midgets?
[17:18:39] <Nodex> tiny cat midgets sorry
[17:18:52] <hems> Nodex, cats not my department.. i love hamsters though
[17:18:58] <Zelest> haha, i love how I just destroyed your convo :D
[17:18:59] <Nodex> lol
[17:19:04] <Nodex> trolololol
[17:19:11] <Zelest> things are awesome here.. found a new lady :D
[17:19:14] <Zelest> and enjoying life basically
[17:19:17] <Nodex> nice, congrats
[17:19:23] <Zelest> cheers :)
[17:19:24] <hems> Zelest, does she use zsh ?
[17:19:38] <Zelest> she doesn't know anything about computers :P
[17:19:43] <Nodex> Bonus
[17:19:52] <Zelest> but she doesn't have any gag reflexes.. at all :o
[17:19:54] <Nodex> she can't argue about things then LOL
[17:19:58] <hems> Zelest, good then you don't even need to delete your private conversations on facebook
[17:20:05] <Zelest> haha
[17:20:10] <double_p> do the tits fit in RAM?
[17:20:11] <Zelest> my private convos on facebook are with her so :)
[17:20:15] <Nodex> privacy does not exist
[17:20:15] <hems> Zelest, and she has more time to clean
[17:20:17] <Zelest> rofl
[17:20:23] <Nodex> Prism has all your data mwhahahaha
[17:20:34] <Nodex> stored in MongoDB I read somewhere
[17:20:37] <Zelest> on a serious note.. i've done more fun stuff with her in bed than I've done with my ex-wife in 13 years.
[17:20:39] <hems> haha
[17:20:44] <Zelest> so yeah, quite of an upgrade there..
[17:20:50] <hems> Zelest, hahahaha
[17:20:54] <Zelest> gah, don't mention prism.. that whole thing annoys me :(
[17:21:00] <Nodex> this is a child friendly chan LOL
[17:21:05] <hems> Zelest, good.. its nice when girls like nutella
[17:21:18] <hems> where you guys based?
[17:21:19] <Zelest> nutella?! :o
[17:21:23] <double_p> lol
[17:21:29] <Zelest> how did you know that? :O
[17:21:33] <hems> Zelest, hahaha
[17:21:33] <Nodex> I just spent a week upgrading every app / site I've written to encrypt all uploaded data because my clients are all paranoid
[17:21:39] <Zelest> hems, it was seriously spot on ;)
[17:21:51] <Nodex> <---- UK unfortunately
[17:21:55] <Zelest> Nodex, tell your clients they're retarded assnuggets :(
[17:21:58] <hems> Nodex, explain then that they have quantum computers and no matter how much you encrypt, they're going to catch you if they want
[17:22:07] <hems> Nodex, am in london, but from brazil
[17:22:07] <Zelest> seriously, this whole NSA/prism story is just retarded
[17:22:11] <Zelest> since WHEN is that even news?
[17:22:12] <Nodex> Zelest : I prefer it to be encrypted tbh
[17:22:20] <Zelest> yeah
[17:22:27] <Nodex> we all knew it was happening tbh
[17:22:33] <Zelest> yeah
[17:22:35] <Nodex> just a little wake up call really
[17:22:40] <Zelest> not really
[17:22:42] <Zelest> i mean
[17:22:43] <hems> Nodex, they will just push it to be legal next time they put any tower down
[17:22:44] <Zelest> if it's not them
[17:22:51] <mift> hi there
[17:22:53] <Zelest> it's some fat network admin being bored
[17:23:03] <Zelest> never ever assume anything is private
[17:23:05] <Zelest> mift, heya
[17:23:13] <hems> well, its probably on their agenda already. put a tower down, blame the internet and push it to be super legal and convince the americans its good for them
[17:23:18] <Nodex> if they want to waste CPU cycles decrypting my uploaded SSH keys then go right ahead, I got nothing to hide LOL
[17:23:33] <Zelest> Nodex, usually they do sociograms
[17:23:37] <Zelest> using ips
[17:23:42] <Zelest> the data itself is "meh"
[17:23:51] <Nodex> eventually they will kill the internet because nobody will trust it to shop
[17:24:05] <Zelest> the swedish NSA had cute openssl exploits 5 years before the bugs went public :P
[17:24:10] <Nodex> and then it will be back to what the internet started for ... PORN
[17:24:21] <Zelest> yeah, that's your only defence..
[17:24:28] <Zelest> make your history as scary as possible :D
[17:24:32] <Nodex> yeh LOL
[17:24:36] <Zelest> they see 5 links and don't dare go through the rest :D
[17:24:45] <Nodex> you see that thing about microsoft giving the NSA crypto keys?
[17:24:52] <Zelest> mhm
[17:24:55] <Nodex> basically backdoors into every copy of winblows
[17:25:04] <Nodex> (in the right hands)
[17:25:08] <Zelest> you know mac's safevault..
[17:25:20] <Zelest> you can decide to "store your keys at apple" in case you lose it/forgot your pass..
[17:25:27] <Zelest> I'm quite sure they're saved with NSA too ;)
[17:25:31] <Nodex> lmfao, how convenient
[17:25:43] <Zelest> and I'm also pretty sure they already have some sort of master key or so
[17:25:46] <Nodex> why not bend over and lube up at the same time
[17:26:00] <Zelest> you make that sound like a bad thing ;)
[17:26:06] <Nodex> hah
[17:26:07] <Zelest> did i mention my new lady owns? :o
[17:26:11] <Zelest> wait... said too much..
[17:26:12] <Zelest> :D
[17:26:33] <Nodex> I can see some form of revolution coming in the next 10-15 years
[17:26:44] <Zelest> just wait until the oil runs out
[17:26:45] <Nodex> "the people" will get tired of all this crap
[17:26:46] <Zelest> \o/
[17:26:52] <Zelest> well
[17:26:55] <Zelest> "the people" don't care
[17:27:06] <Zelest> the 3% of us actually caring is a minority :/
[17:27:11] <Nodex> sad really
[17:27:14] <Zelest> mhm
[17:27:24] <Zelest> imagine things like facebook and goole
[17:27:24] <Nodex> human race been around for ~60k years
[17:27:25] <double_p> openssl is an exploit in itself.. try to code with it. or let alone try some "decent" openssl.cnf :P
[17:27:25] <Zelest> google
[17:27:28] <Zelest> you're always logged in
[17:27:29] <Nodex> it will be gone in 100
[17:27:34] <Zelest> and every site has analytics or facebook like buttons..
[17:27:39] <Zelest> imagine the referrer data on those
[17:27:43] <Zelest> logged to your profile
[17:28:01] <Zelest> quite accurate and complete browser history they got there :D
[17:28:10] <Nodex> on facebook i "Like" EVERYTHING
[17:28:18] <Zelest> no no
[17:28:20] <Nodex> try getting a profile from that f***heads
[17:28:21] <Zelest> i mean the loading of the button
[17:28:33] <Nodex> yeh I understand, I was pointing it out lol
[17:28:34] <Zelest> seeing what sites you visit
[17:28:38] <Zelest> ah :D
[17:28:57] <Nodex> facebook's main funder was JP MOrgan, that's no coincidence
[17:29:10] <Nodex> they saw the potential to profile people for profit years ago
[17:29:17] <Zelest> http://www.businessinsider.com/nsa-prism-keywords-for-domestic-spying-2013-6 :D
[17:29:19] <Zelest> happy mailing!
[17:29:30] <Nodex> yeh I saw that
[17:29:38] <Zelest> but then again, I don't blame NSA the slightest bit
[17:29:39] <mift> I need some help debugging a small piece of node.js/MongoDB
[17:29:50] <mift> Im new to this, so please be kind ^^
[17:29:52] <Nodex> mift, just post it
[17:29:53] <mift> http://pastebin.com/etW5QVka
[17:30:01] <Zelest> "here's our website.. share what you want" .. and people share EVERYTHING.. it's perfect for their agenda :P
[17:30:06] <Zelest> would be stupid of them not to log it
[17:30:16] <mift> I've got two records in the users collection
[17:30:24] <double_p> is this mongodb? (not taht i care, funny discussion)
[17:30:38] <Nodex> mift, what are you expecting to happen?
[17:30:45] <Zelest> dmarlow, humongous databases? check! :D
[17:30:48] <Zelest> err
[17:30:50] <Zelest> double_p*
[17:31:02] <Nodex> pwn3d
[17:31:03] <mift> Nodex: I'm expecting to get one result
[17:31:11] <Zelest> but yeah, a bit off-topic indeed :)
[17:31:40] <Nodex> mongo by default is $and
[17:32:00] <mift> Nodex yeh thats what I thought
[17:32:08] <Nodex> you can just do somehting like this .... db.foo.find({foo:"bar",_id:{$not:1234});
[17:32:19] <mift> Nodex: but since I didn't get a result the "default" way, I tried this one
[17:32:42] <mift> Nodex: ahhh, gonna try
[17:32:55] <double_p> Zelest: ah right.. just reminds me of some 1+TB oracle about flight data over germany.. mhmm.. damn those USB sticks havent been quite big enough back then - lol
[17:33:37] <Nodex> findOne * sorry
[17:50:42] <mift> Nodex: still there?
[17:51:01] <mift> http://pastebin.com/2u7d7mG4
[17:53:09] <mift> could someone please take a look at this? http://pastebin.com/2u7d7mG4
[17:53:13] <Nodex> http://docs.mongodb.org/manual/reference/operator/not/
[17:53:22] <Nodex> I had a syntax error sorry
[17:53:43] <Nodex> db.users.find( { av : true, _id : { $not : ObjectId("51cb2793b5adb324b4000002") } } )
[17:53:43] <mift> oh man, Im sorry
[17:53:59] <Nodex> infact no you did LOL
[17:54:31] <mift> hmm, Im getting this: error: { "$err" : "invalid use of $not", "code" : 13041 }
[17:55:13] <Nodex> http://docs.mongodb.org/manual/reference/operator/ne/
[17:55:15] <Nodex> try that instead
[17:55:27] <Nodex> db.users.find( { av : true, _id : { $ne : ObjectId("51cb2793b5adb324b4000002") } } )
[17:55:58] <mift> Nodex: yes, that worked!
[17:56:16] <mift> what the difference between $not and $ne?
[17:56:43] <Nodex> $not is on an operator, the docs are a little confusing
[17:56:53] <Nodex> $ne mean "no equal"
[17:56:58] <Nodex> not equal *
[17:57:27] <Nodex> whereas $not can be used to have something $not $gt 123 ( same as saying $lt )
[17:58:02] <mift> Nodex: ah, with $not you can invert the statement of a query?
[17:58:11] <Nodex> yeh basically
[17:59:47] <mift> Nodex: well thanks :)
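To summarize the thread: `$ne` compares a field against a value, while `$not` negates another operator expression, which is why the bare `{ $not: ObjectId(...) }` form was rejected. One subtlety worth hedging on: `{ $not: { $gt: 123 } }` matches documents where the field is less than or equal to 123 *or missing entirely*, so it is not quite the same as `$lt`. Both queries simulated on plain objects (the `score` field is a made-up example):

```javascript
// db.users.find({ av: true, _id: { $ne: EXCLUDED } }), simulated.
function findOthers(users, excludedId) {
  return users.filter(u => u.av === true && u._id !== excludedId);
}

// { score: { $not: { $gt: 123 } } }: matches score <= 123 OR score missing,
// because the negation applies to the whole $gt test.
function notGreaterThan(docs, limit) {
  return docs.filter(d => !(d.score > limit));
}
```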
[18:00:56] <hems> Nodex, have you ever cleaned your opLog ? http://docs.mongodb.org/manual/tutorial/change-oplog-size/
[18:01:12] <hems> Nodex, sounds like a lot of my data storage might actually be coming from the oplog
[18:28:05] <ghost1> Hi everyone, I was wondering if anyone can answer a quick question for me. I am currently working on a project to replace an existing legacy system. The legacy system has well over 6TB of data; how would we benefit from using something like mongo? Can mongo be the primary storage location for the data? And in what instances would SQL be better suited? Thanks.
[18:34:39] <ghost1> Crickets
[18:37:46] <ghost1> Ok, maybe a simpler question … what scenarios is mongodb better suited for?
[18:54:43] <TommyCox> ghost1: Is the data tied together with relationships and foreign keys? If so, there would be a de-normalization process you'd have to go through that could get pretty daunting
[18:55:39] <ghost1> Hmm, yea there are a total of 136 table across 16 schemas
[19:22:38] <ghost1> Is there a limit to the amount of data that mongo db can handle???
[19:23:10] <mediocretes> yes, but it's not super practical
[19:23:34] <ghost1> mediocretes: meaning?
[19:24:19] <mediocretes> it's very very high
[19:24:49] <mediocretes> and higher if you use sharding
[19:25:02] <mediocretes> unless you're the NSA, the actual data storage limit is probably not a concern
[19:25:11] <ghost1> LOL, perfect!
[19:25:17] <ghost1> We are not NASA .. :-)
[19:32:22] <Hochmeister> how do you project an ObjectId value when using a group aggregator? I want each group to have its own ObjectId.
[19:41:16] <konr_revmob> How can I query for, say, "animal" matching either "dog", "cat", "mus" etc? Is there a better way than {$or [{"animal":"dog"}, ...]}?
[19:42:51] <mediocretes> http://docs.mongodb.org/manual/reference/operator/in/#op._S_in ?
[19:43:09] <konr_revmob> thanks, mediocretes!
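mediocretes' pointer, spelled out: `$in` matches a document when the field equals any value in the list, replacing the chain of `$or` clauses. Simulated in memory:

```javascript
// db.pets.find({ animal: { $in: ['dog', 'cat', 'mus'] } }), simulated
// on a plain array ('pets' is a hypothetical collection name).
function matchIn(docs, field, values) {
  return docs.filter(d => values.includes(d[field]));
}
```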
[19:47:26] <bmcgee> hey guys, I'm having difficulty figuring out how to put together an aggregation. I have a boolean field in my doc, I want to count the number of documents in which it is true. Any thoughts?
[19:49:38] <mediocretes> is there a reason you need it to be an aggregation?
[19:49:43] <harenson> bmcgee: db.foo.count({'my_bool': true})
[19:49:44] <bmcgee> ah methinks i want a projection
[19:49:51] <bmcgee> or that lol
[19:49:54] <mediocretes> just do a query
[19:49:59] <mediocretes> stop trying to bring a gun to a knife fight
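harenson's one-liner in full: counting documents where a boolean field is true is a plain filtered count, no aggregation pipeline needed. Simulated:

```javascript
// db.foo.count({ my_bool: true }), simulated in memory.
// Strict equality also excludes docs where the field is missing.
function countTrue(docs, field) {
  return docs.filter(d => d[field] === true).length;
}
```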
[19:54:04] <searli> hi there
[19:54:22] <searli> I am having problems with creating user in mongodb
[19:54:25] <searli> am fairly new
[19:54:30] <searli> I get src/mongo/shell/db.js:64 password can't be empty
[19:54:32] <searli> throw "password can't be empty";
[19:54:51] <searli> anybody any idea what i am doing wrong
[19:55:07] <searli> I use the db.addUser command
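For the record, the 2.4-era shell's `db.addUser()` required both a username and a non-empty password in its two-argument form, and the quoted `src/mongo/shell/db.js` error is thrown when the password is omitted or empty. The credentials below are placeholders; the validation sketch approximates what the shell does:

```javascript
// In the shell: use mydb; db.addUser('alice', 's3cret')
// ('alice' / 's3cret' are placeholders, not real credentials).
// The shell-side check that produces searli's error looks roughly like:
function checkAddUserArgs(username, password) {
  if (!password) throw new Error("password can't be empty");
  return { user: username, pwd: password };
}
```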
[20:08:03] <Hochmeister> bmcgee:
[20:08:10] <Hochmeister> bmcgee: http://pastebin.com/F4W3faMm
[20:08:12] <diegows> is it possible to perform queries doing comparision between fields of the same document?
[20:08:28] <diegows> I want to get the documents where field1 < field2
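diegows's question goes unanswered in the log; at the time, the usual answer was a `$where` clause running server-side JavaScript, e.g. `db.coll.find({ $where: 'this.field1 < this.field2' })`, with the caveat that `$where` cannot use indexes and is slow (later server versions added `$expr` for exactly this). Simulated:

```javascript
// db.coll.find({ $where: 'this.field1 < this.field2' }), simulated:
// keep documents where one field compares less than another.
function whereFieldLess(docs) {
  return docs.filter(d => d.field1 < d.field2);
}
```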
[20:08:43] <bmcgee> Hochmeister: thx but i went with count in the end, seems like aggregate is overkill
[20:09:42] <Hochmeister> perhaps, unless you are making a bunch of queries independent of eachother.
[20:09:50] <jblack> Hi. I'm trying to figure out why "yard doc" doesn't seem to be using yard-mongoid when generating docs. Can anyone suggest a useful place for me to go to get help figuring out what's wrong?
[20:59:19] <Vishnevskiy> Hello, I am wondering why, when I upgrade to PyMongo 2.5, I get a connection leak (climbed to 3000 open connections in 30 minutes), while if I downgrade back to 2.3 it's stable at 140 (using gevent)
[20:59:28] <Vishnevskiy> Any ideas would be greatly appreciated
[21:00:54] <yosyp> Is it preferable to have a lot of articles with the same schema, or one large article containing multiple sections with similar data? (think one large forum with subforums)?
[21:01:28] <konr_revmob> So I've got {"location": [{"code": "US"}, {"code": }]}
[21:01:33] <konr_revmob> oops
[21:02:59] <konr_revmob> So I've got items like {"location": [{"code": "US"}, {"code": "UK"}]}. How can I query for elements whose code (inside location) is in a list? will {"location.code": {"$in": ["UK", "BR", "DE"]}} work?
[21:03:17] <kali> Vishnevskiy: ha... better stay on the stable versions (2.2 and 2.4) for a start
[21:03:31] <Vishnevskiy> PyMongo, not MongoDB
[21:03:37] <Vishnevskiy> 2.5.2 is stable
[21:03:44] <kali> Vishnevskiy: ha, my apologies :)
[21:05:05] <konr_revmob> No, of course not, there is a vector between the fields. Is there any workaround?
[21:05:38] <kali> yosyp: you need to pick a model that will fit your query patterns.
[21:05:54] <kali> konr_revmob: i'm pretty sure it does work. have you tried it ?
[21:07:52] <konr_revmob> kali: oops, that was a typo, then :)
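As kali says, the dotted-path query works: when `location` is an array of subdocuments, `'location.code'` matches if *any* element's `code` satisfies the condition. Simulated:

```javascript
// db.items.find({ 'location.code': { $in: ['UK', 'BR', 'DE'] } }), simulated:
// an array-valued path matches when ANY element passes the test.
function matchLocationCode(docs, codes) {
  return docs.filter(d => d.location.some(loc => codes.includes(loc.code)));
}
```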
[21:19:13] <konr_revmob> And finally, dear friends, I do have a foreign key at {:product "bla", :owner_id "42"}, referencing {:_id 42, :species "capybara"}. Can I search for products whose owners are capybaras in a single expression?
[21:23:44] <harenson> konr_revmob: MongoDB is not relational
[21:24:22] <konr_revmob> harenson: just as I feared!
[21:24:47] <harenson> konr_revmob: xD
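Since there are no joins, konr_revmob's capybara lookup takes two round trips: fetch the `_id`s of matching owners, then query products with `$in`. A sketch on plain arrays (collection names are assumed from the example documents):

```javascript
// Two-step client-side 'join', simulated:
//   1) db.owners.find({ species: 'capybara' }, { _id: 1 })
//   2) db.products.find({ owner_id: { $in: ids } })
function productsOwnedBy(owners, products, species) {
  const ids = owners.filter(o => o.species === species).map(o => o._id);
  return products.filter(p => ids.includes(p.owner_id));
}
```
One practical note: the `$in` list must match the stored type exactly, so an `owner_id` stored as the string "42" will not match a numeric `_id` of 42.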
[21:41:29] <justinko> hello
[21:41:49] <justinko> is there anyway to NOT project (as in aggregate) a value if it is null?
[21:42:12] <justinko> or doesn't exist
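justinko's question also goes unanswered; a 2.4-era `$project` cannot drop a field per-document, so the usual workaround is a `$match` stage ahead of it that filters out null or missing values (later servers added `$cond` and `$$REMOVE` for per-document control). Simulated pipeline, with `val` as a made-up field name:

```javascript
// [{ $match: { val: { $ne: null } } }, { $project: { val: 1, _id: 0 } }],
// simulated. In MongoDB's matching rules, $ne: null also excludes
// documents where the field is missing, mirrored here with undefined.
function pipeline(docs) {
  return docs
    .filter(d => d.val !== null && d.val !== undefined) // $match
    .map(d => ({ val: d.val }));                        // $project
}
```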
[21:46:36] <Tech163> I'm learning to write my own driver for mongodb. after an OP_INSERT, how can I get the _id for the insert?
[21:48:33] <benhowdle89> Hello
[22:00:30] <Hochmeister> how do I give a unique identifier to a result of an aggregation pipeline? I want each group to have a unique identifier that I can pass to client JavaScript. http://pastebin.com/smC1FyJ0
[22:02:06] <Hochmeister> the problem is, in my real app (not the simplified example) I'm grouping by a url. Then in my JavaScript I'm building DOM elements using that _id value that comes back from the aggregation pipeline. However, I cannot decorate links and such with the url value (it also gets injected into the window.location.hash).