[00:00:39] <diamonds> so if you have 3 keys in your query, then you change one statement from a straight document replacement to $set
[00:01:05] <diamonds> and you will all of a sudden start inserting the 3 query key/value pairs as well
[00:04:09] <diamonds> I guess it's just not intuitive to me that keys/values would be pulled out of the query argument and used as data to insert
[00:08:00] <diamonds> I assume it's useful in practice
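A minimal sketch of the upsert behavior diamonds is describing, in the mongo shell; the collection and field names here are made up:

    // Upsert with an operator update: the query's equality fields are
    // merged into the inserted document.
    db.things.update({ a: 1, b: 2, c: 3 }, { $set: { d: 4 } }, true)
    // -> inserts { a: 1, b: 2, c: 3, d: 4 }

    // Upsert with a straight document replacement: only the replacement
    // (plus a generated _id) is inserted; the query keys are not merged.
    db.things.update({ a: 1, b: 2, c: 3 }, { d: 4 }, true)
    // -> inserts { d: 4 }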
[01:33:19] <aboudreault> Is there a mongodb JS client?
[01:34:19] <aboudreault> kk, there are some HTTP interfaces available.
[03:12:31] <arnpro> is it possible to apply a $limit to a $group expression?
[03:12:52] <arnpro> so it only keeps, for example, the first 5 records for each user
[03:13:39] <ronar> Hi all! I have an issue with mongo on OS X. I downloaded mongo and there's a bin folder in it. But how do I install this folder on the system? It's very unhandy to navigate to the ~/Downloads folder to exec mongod.
[03:26:11] <arnpro> not really Mike1382, but a quick google search gave me this: https://groups.google.com/d/msg/django-non-relational/qEt4xSjuwYA/REj98YTaDvQJ
[03:48:30] <Mike1382> I am a bit confused with regard to MongoEngine and Django-nonrel.. Are they separate things? Can I use MongoEngine without a forked version of Django?
[03:50:01] <Mike1382> If they are separate things, then what are the benefits of using the forked non-relational version rather than MongoEngine + the classic version of Django?
[06:28:38] <axilaris> hi... can I get some advice from you guys... I'm new to mongodb... how should I model 2 tables... should I use belongs_to or embedded_in?
[09:04:13] <igotux> i have a small sharded cluster with a database of a few GBs.. am planning to do mongodump on a mongos... :- http://docs.mongodb.org/manual/tutorial/backup-small-sharded-cluster-with-mongodump/
[09:04:27] <igotux> is it going to affect any writes/reads on another mongos?
[09:27:51] <axilaris> just playing around with mongodb.... im just curious... is it alright to have different tables with relations between them? because in mongo, inserts to whatever table just go into the same db....
[09:36:46] <NodeX> I would say do what's best for you and your app/data
[09:38:01] <axilaris> ok... u dont have a strong opinion on how to go with it... i guess what u r saying is... if i want performance and scalability then nosql... otherwise... sql
[14:02:57] <bean|work> Deprecated. As of release 2.9.0, replaced by ReadPreference.secondaryPreferred(DBObject firstTagSet, DBObject... remainingTagSets)
[14:05:33] <arnpro> hey kali, I understand. I have been using .aggregate() with a $group clause just fine, however I would like to limit the group count to n documents.
[14:05:58] <kali> arnpro: you just can't. there is no nice way to do this
[14:12:17] <arnpro> kali, does this help http://pastebin.com/hPMw9E62 ? how can I rewrite that as a mapreduce?
[14:31:43] <bean|work> arnpro: you may want to give this a read: http://docs.mongodb.org/manual/applications/map-reduce/ and see if you can apply it to your current problem.
[14:37:45] <pquery> can anyone here help me with a sharding issue? I'm having trouble with an initial setup, especially with connecting to the shards from the mongos instance
[14:38:04] <pquery> I keep getting "couldn't connect to new shard socket exception [CONNECT_ERROR]"
[14:38:39] <pquery> I've seen the Jira issue on this but the flushRouterConfig : 1 isn't helping
[14:40:36] <Guest32592> hello, I want to find a document in an array in an array: inventory.squads[..].units[..]._id == mysearchid
[14:41:00] <Guest32592> I've used $all to search values in a single array level, but I need to search deeper now
[14:41:41] <Guest32592> it doesn't have to be fast, this is for debugging only anyway
[14:45:54] <Guest32592> do you mean I should use dot notation, or why am I using it there?
[14:46:21] <Guest32592> If the second, then I'm using mongoose, but I'd like to know how to do this in MongoDB directly
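A sketch of what this can look like in the plain mongo shell; "inventory" and mysearchid come from the question, everything else is assumed. Dot notation descends through both array levels:

    // Matches documents where any unit of any squad has the given _id.
    db.inventory.find({ "squads.units._id": mysearchid })

    // To constrain several fields to the same nested unit, apply
    // $elemMatch to the inner array:
    db.inventory.find({ "squads.units": { $elemMatch: { _id: mysearchid } } })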
[14:51:38] <arnpro> I think I started off on the right foot, however the mapreduce is throwing an error: http://pastebin.com/e9NBgYR5 can someone tell me what I'm missing?
[15:26:11] <nb-ben> can gridfs work properly over WAN?
[15:32:29] <arnpro> hey bean|work I got this so far: http://pastebin.com/485Wcu7A however the reduce isn't working, it's storing everything. I need a way to debug those functions, is there an easy way to do that?
[15:33:11] <bean|work> arnpro: I'm a sysadmin, not a developer, I was just pointing you to the documentation
[15:38:52] <MatheusOl> arnpro: you're calling emit with _id as the first argument
[15:39:04] <MatheusOl> arnpro: it is probably not what you wanted
[15:39:30] <arnpro> I see MatheusOl, should I change it to something like id_str?
[15:42:57] <MatheusOl> arnpro: Depends on what you want to group
[15:43:10] <MatheusOl> Think of this first argument as the GROUP BY of SQL
[15:43:44] <arnpro> MatheusOl: I'd like to group all the docs belonging to a user defined by id_str, with a limit of 10 per user, no more. so I'm trying to make a mapreduce function
[15:44:08] <arnpro> MatheusOl: I'm trying to emulate this: http://pastebin.com/hPMw9E62
[15:44:19] <NodeX> is using the aggregation framework not an option?
[15:44:46] <arnpro> yes it was, until the limit rule needed to be applied :(
[15:45:15] <arnpro> from what I have read, there is no $limit for a $group method, so I need to emulate it
[15:47:45] <arnpro> I'm sorry MatheusOl, I am not following you. I tried with aggregate() just like http://pastebin.com/hPMw9E62 shows and it didn't let me put a limit in there
[15:48:38] <arnpro> could you please point out how to apply the limit to my needs?
[16:47:36] <smith__> while trying to run mongo I got the following error: terminate called after throwing an instance of 'std::runtime_error' what(): locale::facet::_S_create_c_locale name not valid
[16:48:15] <smith__> I've tried to re-gen the locale with locale-gen en_US.UTF-8 (Generating locales... en_US.UTF-8... done) but the problem persists
[16:50:42] <MatheusOl> arnpro: if you want to store them, use the first version
[16:50:44] <arnpro> MatheusOl: I think it's done http://pastebin.com/Jfxk0b15 :D tyvm
[16:52:10] <rideh> how do i change the datatype for a set of data from int to boolean? I imported a series of data and of course it didn't know how to interpret it.
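One way to do the conversion from the shell, as a rough sketch; the collection and field names ("coll", "flag") are made up, and 16 is the BSON type code for a 32-bit int:

    // Rewrite each int-typed value as a boolean, one document at a time.
    db.coll.find({ flag: { $type: 16 } }).forEach(function (doc) {
        db.coll.update({ _id: doc._id }, { $set: { flag: doc.flag !== 0 } });
    });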
[16:52:34] <MatheusOl> arnpro: If your reduce function uses tweets as an array, it may receive an array too
[16:52:50] <MatheusOl> arnpro: In some cases, MongoDB may send a result of a reduce function to another reduce function
[16:53:09] <MatheusOl> The cnt var is not the problem
[16:53:25] <MatheusOl> The second parameter of emit and the return of reduce must have the same schema
[16:54:26] <arnpro> alright, I see what you mean, so just reverse the order of params, and good to go, right?
[16:54:27] <kali> the second argument of the emit calls and the return value of the reduce MUST have the same form, and the second arg of the reduce calls will be an array of these same things
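Putting kali's rule together with the 10-per-user cap arnpro is after, a rough mapReduce sketch; the collection and field names (tweets, id_str, text) are guessed from the conversation:

    var mapFn = function () {
        // The emitted value already has its final shape: { tweets: [...] }.
        emit(this.id_str, { tweets: [this.text] });
    };
    var reduceFn = function (key, values) {
        // reduce may run in stages, so it accepts and returns that same shape.
        var merged = { tweets: [] };
        values.forEach(function (v) {
            merged.tweets = merged.tweets.concat(v.tweets);
        });
        merged.tweets = merged.tweets.slice(0, 10); // at most 10 per user
        return merged;
    };
    db.tweets.mapReduce(mapFn, reduceFn, { out: "tweets_by_user" });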
[17:05:38] <owen1> is it possible for a client to access the nearest mongo host in a replicaset (read preference = nearest) in version 2.0.1?
[17:06:00] <owen1> and how do I set secondaries to accept reads?
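For the second question, the usual 2.0-era mechanism in the shell is slaveOk; whether "nearest" routing works with 2.0.1 depends on the driver, since the read-preference modes were only formalized around the 2.2-era drivers. A sketch:

    // On a connection to a secondary, allow reads on this connection:
    db.getMongo().setSlaveOk()
    db.mycollection.find()   // now permitted on the secondary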
[17:06:02] <daslicht> is it possible to start mongodb from PHP?
[17:12:18] <joec> I have a question on mongostat. Under the update column, is it reporting the number of update commands issued or the number of documents affected by an update during that time period?
[17:16:53] <arnpro> how to save an aggregate() result into a new collection?
[17:17:49] <mr_smith> couldn't you just: foo = db.bar.aggregate(…); db.foobar.save(foo)?
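One caveat with that suggestion: in the 2.2-era shell, aggregate() returns a wrapper document ({ result: [...], ok: 1 }), so saving it directly stores the wrapper rather than the rows. A sketch with assumed names:

    var res = db.bar.aggregate({ $group: { _id: "$user", n: { $sum: 1 } } });
    // Insert each result row into the target collection.
    res.result.forEach(function (doc) {
        db.foobar.insert(doc);
    });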
[17:30:56] <arnpro> I got a starter question :p, how can I extract all the tweets from documents like these: http://pastebin.com/4zau1Zu1 and store them all together?
[17:31:39] <arnpro> without the fancy nested stuff
[17:45:23] <br-> hi. what does this mean? inserting a trivial document: pymongo.errors.OperationFailure: _a != -1
[17:52:36] <rideh> how do i determine the bson type of a field?
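A sketch of inspecting types with the $type query operator (numeric BSON codes: 1 double, 8 bool, 16 32-bit int, 18 64-bit int); collection and field names are assumed:

    db.coll.find({ flag: { $type: 8 } }).count()    // how many are booleans
    db.coll.find({ flag: { $type: 16 } }).count()   // how many are still ints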
[19:04:10] <saml> where can i find mongodb schema?
[19:30:48] <arnpro> kali, The schema between the 1-tweet users and the 1+ tweet users is different, because the next step is to extract them all from there and put them into a collection
[19:31:35] <kali> arnpro: yeah, this is what i meant earlier: the emitted value must match the reduced result
[19:32:09] <arnpro> kali: I think that was worked out, tho? am I still missing something in the map/reduce functions?
[19:33:31] <kali> arnpro: can you show me one document from the tweets collection ?
[19:33:52] <owen1> how to rename a key in a collection? this does nothing: db.devices_ypmobile.update( {}, { $rename: { 'udid': 'deviceId'} }, {multi: true} )
[19:34:53] <kali> owen1: it's not "{multi: true}", its "false, true" in the shell
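So, per kali, the working form of owen1's command in a 2.0-era shell would use the positional upsert and multi flags:

    db.devices_ypmobile.update({}, { $rename: { udid: "deviceId" } }, false, true)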
[19:48:20] <owen1> kali: is there a way to access old versions of the docs? i use mongo 2.0.1 and don't even know how to make my secondaries readable and make my app access the 'nearest' mongo host.
[19:49:19] <kali> owen1: not that i know of. TBH, i'd be glad if we had well-working docs for the current version. this documentation migration is a PITA and it's taking forever
[19:50:25] <kali> owen1: also, mongodb is still very green. you really want to bump your version (at least to 2.0.8) and consider bumping to 2.2.2
[19:50:33] <arnpro> kali can I modify the _ids when using $unwind? because when trying to move the tweets into 1 single collection the user _ids get duplicated
[19:51:12] <arnpro> kali: I'm trying something like this: http://pastebin.com/1DnJZPEq
[19:51:20] <kali> arnpro: no, but you can generate new ids after $unwinding
[20:23:21] <arnpro> is it possible to have db. available in a finalize function?
[20:23:23] <JoeyJoeJo> What's a good way to convert a pymongo cursor into json? Right now I just iterate over the cursor and pass a dict to json.dumps. Is there a faster way?
[20:27:34] <arnpro> why is the reduce skipped for a unique-key document??
[20:27:56] <kali> because there is nothing to reduce
[20:28:24] <pquery> I'm a fan of mongo, but mongoimport seems a bit slow, is there a reason that 50K documents would take over 12 minutes to import?
[20:28:34] <arnpro> kali: how can I work around it? It'd be pretty good to insert each tweet the reduce function finds, but what about the unique ones?
[20:29:07] <kali> arnpro: you should not access the data from the reduce
[20:29:10] <pquery> and it gives strange output -- 337572429216/13831177 2440663%
[20:29:31] <kali> arnpro: and don't forget that mongo can call it several times for the same key to reduce in stages
[20:31:00] <arnpro> kali so im pretty much lost again hehe, my idea was to extract the nested tweets array from the mapreduce result and store them in a collection. from what I have searched, it returns the whole document where your keyword was found, not just a part of it
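Following kali's advice to keep db access out of reduce, one approach is to let mapReduce finish and then flatten its output collection in a second pass; the collection names here are assumed. Note that when reduce is skipped for a single-emit key, the stored value is simply the emitted value, so the shape is uniform either way:

    // tweets_by_user docs look like { _id: <id_str>, value: { tweets: [...] } }
    db.tweets_by_user.find().forEach(function (doc) {
        doc.value.tweets.forEach(function (t) {
            db.all_tweets.insert({ user: doc._id, text: t });
        });
    });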
[20:33:11] <pquery> kali: not that big, like 2 lines apiece. I didn't count the bytes because one of the data team made them, but I opened up the json and looked and they're small
[20:38:45] <kali> saml: try with mongoexport, then
[20:38:54] <arnpro> kali: you mentioned something about generating _ids when using $unwind, how can I achieve that?
[20:39:27] <kali> arnpro: you $unwind, and at the next stage you use project to generate new ids
[20:43:26] <arnpro> kali: like this? http://pastebin.com/D7ynAHxQ
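A sketch of the $unwind approach; since the 2.2-era pipeline has no way to mint ObjectIds inside $project, this version generates the new _ids client-side while inserting (collection names assumed):

    var res = db.users.aggregate({ $unwind: "$tweets" });
    res.result.forEach(function (doc) {
        var t = doc.tweets;
        t._id = new ObjectId();   // fresh _id avoids duplicate-key errors
        db.all_tweets.insert(t);
    });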
[20:55:38] <arnpro> is there an operator for $project-ing all fields xD ?
[20:56:01] <arnpro> instead of going field1:1,field2:1... fieldn:1 etc
[21:58:30] <dan_> Hi! When I'm in the console on one of my mongodb shards, running db.stats() against all non-empty dbs reports 26 indexes. But the MMS index count graph (DB Stats->Indexes) for the same shard reports 42 indexes. is there any way to reconcile the difference between these values? Per the deployed code, I expect 26 indexes.
[22:04:47] <dan_> maybe MMS is instrumenting mongo-internal system indexes that wouldn't be inventoried by db.stats()?