PMXBOT Log file Viewer


#mongodb logs for Friday the 1st of February, 2013

[00:00:39] <diamonds> so if you have 3 keys in your query, then you change one statement from $set to a straight document replacement
[00:01:05] <diamonds> you will all of a sudden start inserting the 3 query key pairs as well
[00:04:09] <diamonds> I guess it's just not intuitive to me that keys/values would be pulled out of the query argument and used as data to insert
[00:08:00] <diamonds> I assume it's useful in practice
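The behaviour diamonds is describing is MongoDB's documented upsert rule: when the update document uses operators such as $set, the query's equality fields are merged into the newly inserted document; a whole-document replacement inserts only the replacement itself (plus _id, if queried). A minimal pure-Python simulation of that rule; no server involved, and `simulate_upsert` is a made-up helper for illustration only:

```python
def simulate_upsert(query, update):
    """Return the document an upsert would insert when no match exists.

    Sketch of MongoDB's documented rule: operator-style updates ($set etc.)
    merge the query's equality fields into the new document; replacement-style
    updates insert only the replacement (plus _id from the query, if present).
    """
    operator_style = any(k.startswith("$") for k in update)
    if operator_style:
        # start from the query's equality conditions...
        doc = {k: v for k, v in query.items() if not k.startswith("$")}
        # ...then apply the operators (only $set modeled here)
        for op, fields in update.items():
            if op == "$set":
                doc.update(fields)
        return doc
    # replacement-style: query fields (other than _id) are NOT copied over
    doc = dict(update)
    if "_id" in query:
        doc["_id"] = query["_id"]
    return doc
```

So changing one statement between $set and a straight replacement changes which of the query's key pairs land in the inserted document.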
[01:33:19] <aboudreault> Is there a mongodb JS client ?
[01:34:19] <aboudreault> kk, there are some HTTP interfaces available.
[03:12:31] <arnpro> is it possible to apply a $limit to a $group expression?
[03:12:52] <arnpro> so it only limits, for example, the first 5 records for each user
[03:13:39] <ronar> Hi all! I have an issue with mongo on OS X. I downloaded mongo and there's a bin folder in it. But how do I install this folder in the system? It's very unhandy to navigate to the ~/Downloads folder to exec mongod.
[03:19:14] <Mike1382> ...
[03:19:58] <Mike1382> Hi, I'm looking at Django and MongoDB framework options and can't seem to open the link related to Django MongoDB Engine
[03:20:21] <Mike1382> Anyone know the story as to why this URL is not working?
[03:20:22] <Mike1382> http://django-mongodb.org/
[03:22:58] <Mike1382> Anyone here?
[03:26:11] <arnpro> not really Mike1382, but a quick google search gave me this: https://groups.google.com/d/msg/django-non-relational/qEt4xSjuwYA/REj98YTaDvQJ
[03:26:22] <Mike1382> Yeah, just read that
[03:26:23] <arnpro> you can clone the docs and read all you want locally :D
[03:26:41] <Mike1382> k, doing that now
[03:27:03] <Mike1382> Thanks
[03:48:30] <Mike1382> I am bit confused with regards to Mongo Engine and Mongo non-real.. Are they separate things? Can I use Mongo Engine without a forked version of Django?
[03:48:44] <Mike1382> Mongo Non-relational
[03:50:01] <Mike1382> If they are separate things, then what are the benefits of using the forked non-relational version rather than mongo engine + classic version of Django
[06:28:38] <axilaris> hi... can i get some advice from u guys...im new to mongodb... how should i model 2 tables....should i use belongs_to or embedded_to
[08:37:10] <[AD]Turbo> hi there
[09:04:13] <igotux> i have a small sharded cluster with a database of a few GBs .. am planning to do mongodump on a mongos... :- http://docs.mongodb.org/manual/tutorial/backup-small-sharded-cluster-with-mongodump/
[09:04:27] <igotux> is it going to affect any write/read from another mongos ?
[09:27:51] <axilaris> just playing around mongodb.... im just curious... is it alright to have different tables with relation to it ? because in mongo, whatever table inserts will just go into the same db....
[09:28:05] <axilaris> is it ok with you guys ?
[09:28:14] <axilaris> or do u prefer sql db for that ?
[09:28:40] <axilaris> need some nosql/db designer recommendation ... anyone ?
[09:29:07] <NodeX> avoid relations
[09:30:53] <axilaris> really
[09:30:59] <axilaris> so if there is relationship
[09:31:03] <axilaris> then forget about mongo ?
[09:31:09] <axilaris> forget about nosql ?
[09:31:16] <NodeX> I didn't say
[09:31:17] <NodeX> that
[09:31:32] <NodeX> not every
[09:31:34] <NodeX> data
[09:31:36] <NodeX> model
[09:31:40] <NodeX> uses relations
[09:31:40] <axilaris> say u have a simple case... author->articles->comments
[09:31:56] <axilaris> all datastructures are related
[09:32:01] <NodeX> falser
[09:32:03] <NodeX> -r
[09:32:12] <axilaris> so yeah there are relations added into nosql
[09:32:30] <axilaris> to me (imean im new to this) it looks a bit difficult to manage
[09:33:12] <axilaris> everything is just inserted into the same db... u would have to query the relationship to aggregate the data
[09:33:22] <axilaris> is that something normally practiced by you guys ?
[09:33:23] <NodeX> management goes out of the window for scalability and performance
[09:33:35] <NodeX> I think you're confusing what a "db" is
[09:33:38] <axilaris> yes
[09:33:48] <NodeX> a db is the same as an SQL db
[09:33:53] <NodeX> a collection == table
[09:33:53] <axilaris> for speed and scalability i see the point
[09:33:55] <NodeX> a document == row
[09:34:02] <axilaris> but i guess there is a bit of management in it
[09:34:16] <axilaris> and i would like to know the best practices in it
[09:34:29] <NodeX> define "best practices"
[09:34:37] <NodeX> your data and needs are different to everyone else
[09:34:51] <axilaris> ok lets take an example of author->articles->comments for a newssite
[09:34:57] <axilaris> you want to scale
[09:35:01] <axilaris> at the same time
[09:35:06] <axilaris> you would want to manage as well
[09:35:24] <axilaris> how would u design it in your opinion
[09:35:27] <NodeX> I don't see the management problem
[09:36:16] <axilaris> true also... u can always query to get the info... a bit strange dealing with a flat table
[09:36:33] <axilaris> so u would say go for it
[09:36:35] <axilaris> right :)
[09:36:46] <NodeX> I would say do what's best for you and your app/data
[09:38:01] <axilaris> ok... u dont have a strong opinion on how to go with... i guess what u r saying... if i want performance and scalability then nosql...otherwise... sql
[09:38:26] <axilaris> thanks nodex...
[09:38:53] <NodeX> I didn't say that
[09:39:11] <axilaris> ok what is your direction (recommendation )
[09:39:16] <NodeX> I said do what's right for YOU
[09:39:32] <axilaris> how about for u?
[09:39:44] <NodeX> what's right for me is different than for you
[09:39:49] <axilaris> doesnt matter
[09:39:58] <axilaris> just want to know another persons opinion
[09:40:03] <NodeX> only YOU can make decisions about YOUR data because YOU know YOUR data better than me
[09:40:30] <axilaris> okay... im just putting a case forward
[09:40:33] <NodeX> personally I hate relations, they're not efficient so I don't use relational DB's at all
[09:40:55] <NodeX> performance and scalability is number 1 priority for me
[09:41:01] <axilaris> in your experience... do u at all have relations in most of your deployment ?
[09:41:09] <axilaris> and how many ?
[09:41:17] <axilaris> or usually none ?
[09:41:24] <NodeX> mostly none
[09:41:27] <axilaris> wow
[09:41:32] <axilaris> easier to see i guess
[09:41:34] <axilaris> ok
[09:43:58] <axilaris> if u flatten them... data would be larger.... how about the performance ?
[09:44:14] <NodeX> I have lots of RAM ;)
[09:44:16] <axilaris> i guess its affected a bit
[09:44:28] <axilaris> ah...so thats the way to go huh
[09:45:16] <axilaris> thanks alot nodeX for your insights
[09:45:34] <NodeX> it's my way to go
[09:45:47] <NodeX> because performance is priority #1
[09:45:55] <NodeX> (for me)
[09:45:56] <axilaris> well... good to know... its always lessons learned
[09:46:20] <NodeX> I have the luxury of making the decisions for my apps/clients so I can go that route
[09:46:45] <NodeX> some people are bound by ODM's and teams of people and cannot work like I do
[11:42:04] <NodeX> anyone know the ISO format that Mongo Date uses?
[12:32:10] <Killerguy> hi guys
[12:32:19] <Killerguy> can someone help me about this stacktrace?
[12:32:20] <Killerguy> http://pastebin.com/PHHjzmZU
[12:33:31] <Killerguy> db version v2.2.1, pdfile version 4.5
[12:33:31] <Killerguy> Fri Feb 1 13:32:04 git version: nogitversion
[12:56:17] <jtomasrl> does mongo ISODate save timezone?
[12:59:53] <ron> try it and see!
[13:00:11] <NodeX> it can save a "Z" which I assume stands for Zulu (UTC)
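NodeX's guess is right: BSON dates are stored as UTC milliseconds since the Unix epoch, and shells render them in ISO 8601 with a trailing "Z" (Zulu time, i.e. UTC). That also answers jtomasrl: the original timezone is not saved, only the UTC instant. A quick stdlib sketch of that rendering (`to_isodate_string` is a hypothetical helper, not a driver function):

```python
from datetime import datetime, timezone

def to_isodate_string(ms_since_epoch):
    """Render a BSON date (UTC milliseconds since the epoch) the way the
    mongo shell does: ISO 8601 with millisecond precision and a "Z" suffix.
    Whatever timezone the value was saved from is gone; only UTC remains."""
    dt = datetime.fromtimestamp(ms_since_epoch / 1000.0, tz=timezone.utc)
    # %f yields microseconds; trim to milliseconds and append the Zulu marker
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
```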
[13:37:42] <arnpro> how can I write a map reduce that groups a set of records by user, but only the first n documents of each user?
[13:53:22] <jefersonleao> hey everybody, can anyone give me a little help about read preferences with java driver??
[13:54:41] <jefersonleao> i run into a problem in that i cannot specify it when perform a simple "find" operation
[13:55:08] <arnpro> is there a more active support channel for mongo?
[13:59:12] <jefersonleao> idk but i think you probably can mapreduce your set with a simple js function
[13:59:18] <kali> arnpro: this channel is usually active, but your question has no satisfactory answer
[13:59:24] <kali> arnpro: you can't.
[13:59:38] <kali> as for jefersonleao, we're still waiting for a question :)
[14:00:09] <jefersonleao> i need to perform a find operation, using the java driver
[14:00:19] <jefersonleao> but cannot set a read preference to that query
[14:00:44] <kali> what do you mean "cannot" ?
[14:01:21] <kali> you don't know how to use the api ? you get an error ? you get no error but the preference is not honoured ?
[14:01:50] <jefersonleao> i think i'm just running into problems specifying the preferences correctly
[14:02:07] <jefersonleao> i tried ReadPreferences.withTags, but its deprecated
[14:02:42] <kali> jefersonleao: what's the version of the driver you're using ?
[14:02:57] <jefersonleao> 2.10.1
[14:02:57] <bean|work> Deprecated. As of release 2.9.0, replaced by ReadPreference.secondaryPreferred(DBObject firstTagSet, DBObject... remainingTagSets)
[14:03:49] <jefersonleao> god, thanks :)
[14:04:45] <bean|work> sometimes it helps to read the API
[14:04:55] <jefersonleao> i think it does, my bad
[14:05:15] <jefersonleao> i just did not pay attention to the version
[14:05:18] <jefersonleao> thanks anyway
[14:05:33] <arnpro> hey kali I understand, I have been using .aggregate() with a $group clause just fine, however I would like to limit the group count to n documents.
[14:05:58] <kali> arnpro: you just can't. there is no nice way to do this
[14:12:17] <arnpro> kali, does this help http://pastebin.com/hPMw9E62 ? how can I re-write that into a mapreduce ?
[14:31:43] <bean|work> arnpro: you may want to give this a read: http://docs.mongodb.org/manual/applications/map-reduce/ and see if you can apply it to your current problem.
[14:37:45] <pquery> can anyone here help me with a sharding issue? I'm having trouble with an initial setup, especially with connecting to the shards from the mongos instance
[14:38:04] <pquery> I keep getting "couldn't connect to new shard socket exception [CONNECT_ERROR]"
[14:38:39] <pquery> I've seen the Jira issue on this but the flushRouterConfig : 1 isn't helping
[14:40:36] <Guest32592> hello, I want to find a document in an array in an array: inventory.squads[..].units[..]._id == mysearchid
[14:41:00] <Guest32592> I've used $all to search values in a single array level, but I need to search deeper now
[14:41:41] <Guest32592> it doesn't have to be fast, this is for debugging only anyway
[14:43:53] <NodeX> dot notation?
[14:45:54] <Guest32592> do you mean I should use dot notation, or why am I using it there?
[14:46:21] <Guest32592> If the second, then I'm using mongoose, but I'd like to know how to do this in MongoDB directly
[14:51:38] <arnpro> I think I started with the right foot, however the mapreduce is throwing an error: http://pastebin.com/e9NBgYR5 may someone tell me what I'm missing?
[14:56:36] <HankHendrix> Hello there
[14:56:45] <Guest32592> hmm, thanks NodeX I think. Turns out simply searching for { 'squads.units._id':mysearchid} works
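What made `{'squads.units._id': mysearchid}` work is that dot notation descends through nested arrays implicitly: each path segment steps into subdocuments, and arrays are searched element-wise at every level, so "squads.units._id" reaches into both arrays without any explicit index. A rough pure-Python model of that matching semantics (`dot_match` is a made-up helper for illustration, not MongoDB code):

```python
def dot_match(doc, path, value):
    """Approximate MongoDB dot-notation matching: descend through nested
    dicts by path segment, searching every element of any array met on
    the way, and compare the leaf against the target value."""
    parts = path.split(".")

    def walk(node, i):
        if isinstance(node, list):
            # arrays match if ANY element matches the rest of the path
            return any(walk(item, i) for item in node)
        if i == len(parts):
            return node == value
        if isinstance(node, dict) and parts[i] in node:
            return walk(node[parts[i]], i + 1)
        return False

    return walk(doc, 0)
```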
[14:58:15] <pquery> nobody has sharding experience in here?
[14:58:22] <pquery> on 2.2?
[14:59:33] <bean|work> pquery: I set up mongos in front of a replicaset just fine last week. but i don't have actual shards, just a single replicaset
[15:00:16] <pquery> I have the replicaset done as well
[15:00:45] <pquery> I'm running into the issue of the mongos not being able to connect to the shards
[15:00:55] <pquery> I keep getting "couldn't connect to new shard socket exception [CONNECT_ERROR]"
[15:01:34] <pquery> I've gotten past all the other issues and can get into each of the replica instances from the mongoS instance
[15:01:45] <bean|work> yes, i saw your previous message. I don't think I can help. How are you setting up the shard
[15:01:57] <pquery> I just checked the tcp timeout on them all and it's 7200
[15:02:20] <pquery> I have 3 node replica set in EC2
[15:02:44] <pquery> then I have 1 instance that is both the configserver and the mongos
[15:03:18] <pquery> when I go to sh.addShard I'm running into the socket issue
[15:04:15] <pquery> all instances are running on 27017 except the config server which is running on 27019
[15:04:23] <pquery> and everything is forked
[15:05:06] <bean|work> are you mixing hostnames and IPs?
[15:05:19] <pquery> I've tried restarting the mongoS instance a few times, and I've tried running the db.adminCommand( { flushRouterConfig : 1 } )
[15:05:39] <pquery> no, I've set everything up in etc/hosts on all the instances
[15:06:02] <bean|work> ok
[15:06:08] <pquery> and they're all using internal ip address for AWS
[15:06:34] <bean|work> I can't say I've ever done in on EC2 :(
[15:06:45] <pquery> it's a whole different type of beast, lol
[15:07:03] <pquery> dynamic ips / dns
[15:07:28] <pquery> but I haven't restarted anything yet so everything is staying the same
[15:07:56] <bean|work> and on your replicaset rs.status() looks right?
[15:08:34] <pquery> yes
[15:08:49] <pquery> I think, should I see any shard info in there?
[15:09:02] <bean|work> i dont think so
[15:09:34] <bean|work> just a primary, and secondaries
[15:09:49] <bean|work> and all mongo bits are 2.2?
[15:11:15] <pquery> yes, I set all this up yesterday using the latest repo from 10gen
[15:11:25] <pquery> I just put the status in a pastie http://pastie.org/6012327
[15:13:10] <bean|work> ok, and then the command you're running for creating the shard is sh.addShard("rs_cent_shard/shardA:27017") ?
[15:13:25] <bean|work> or whatever the command is, sorry only one cup of coffee in
[15:15:44] <pquery> no
[15:16:07] <pquery> I was running sh.addShard("shardA/shardA:27017")
[15:16:19] <pquery> I will try yours though
[15:17:01] <pquery> that worked, I thought the first part was just the name you were giving it
[15:17:13] <pquery> not the rs name
[15:17:15] <bean|work> you already named your replica set
[15:17:25] <bean|work> yep, it needs the rs name
[15:17:35] <pquery> this is where I was stuck and it's not documented all that well
[15:17:39] <pquery> thanks bean
[15:18:03] <bean|work> no prob
[15:18:13] <bean|work> I'm just a sysadmin in here to help people out when I can ha
[15:22:01] <JoeyJoeJo> When I add .explain() to my find() queries, does "millis":4 mean it took 4 milliseconds to complete the query?
[15:22:23] <pquery> so I added 1 of the replica set, and then I tried to add another instance of the replica set and that failed...
[15:22:43] <pquery> does that mean that it has added all of the nodes of the replica set to the shard?
[15:22:43] <NodeX> JoeyJoeJo : yes
[15:23:33] <pquery> nm, I see in sh.status the shards in "host"
[15:23:48] <JoeyJoeJo> nodex: thakns
[15:23:49] <JoeyJoeJo> thanks
[15:26:11] <nb-ben> can gridfs work properly over WAN?
[15:32:29] <arnpro> hey bean|work I got this so far: http://pastebin.com/485Wcu7A however the reduce isn't working, it's storing everything. I need a way to debug those functions, is there an easy way for that?
[15:33:11] <bean|work> arnpro: I'm a sysadmin, not a developer, I was just simply pointing you to the documentation
[15:38:52] <MatheusOl> arnpro: you're calling emit with _id as the first argument
[15:39:04] <MatheusOl> arnpro: it is probably not what you wanted
[15:39:30] <arnpro> I see MatheusOl should I change it for something like id_str ?
[15:42:57] <MatheusOl> arnpro: Depends on what you want to group
[15:43:10] <MatheusOl> Think about this first argument as GROUP BY of SQL
[15:43:44] <arnpro> MatheusOl: I'd like to group all the docs belonging to a user defined by id_str with a limit of 10 per user, no more. so I'm trying to make a mapreduce function
[15:44:08] <arnpro> MatheusOl: I'm trying to emulate this: http://pastebin.com/hPMw9E62
[15:44:19] <NodeX> is using the aggregation framework not an option?
[15:44:46] <arnpro> yes it was until the limit rule was needed to apply :(
[15:45:15] <arnpro> from what I have read, there is no $limit for a $group method, so I need to emulate it
[15:45:56] <MatheusOl> sure there is
[15:46:00] <MatheusOl> use the aggregation function
[15:46:02] <MatheusOl> not group
[15:47:45] <arnpro> Im sorry MatheusOl I am not following you, I tried with aggregate() just like http://pastebin.com/hPMw9E62 shows and it didn't let me put a limit in there
[15:48:38] <arnpro> could you please point me out how to have the limit apply to my needs?
[15:49:33] <MatheusOl> let me see
[15:52:49] <MatheusOl> arnpro: what you want to do exactly?
[15:53:44] <arnpro> I'm trying to count the number of documents a user has, but no more than 10
[15:54:08] <arnpro> in this case, the number of tweets each user has but only show the first 10
[15:54:29] <MatheusOl> you want to count them, or to show the first 10?
[15:54:37] <MatheusOl> in a per user basis, right?
[15:54:41] <arnpro> yes
[15:55:10] <arnpro> show the first 10
[16:07:45] <strnadj> limit
[16:08:40] <strnadj> in aggregatino framework: { $limit : 10 }
[16:08:44] <strnadj> *aggregation
[16:09:17] <strnadj> in shell : db.collection.find().limit(10)
[16:11:24] <MatheusOl> humm...
[16:11:31] <MatheusOl> strnadj: it's not that easy
[16:11:47] <MatheusOl> arnpro wants 10 for "each" user, right?
[16:11:52] <kali> this limits the whole collections, not each group
[16:12:00] <kali> there is no way to do that easily
[16:12:12] <MatheusOl> I don't think it can be easily achieved
[16:12:41] <arnpro> yes MatheusOl 10 only
[16:13:11] <MatheusOl> Is that a huge collection?
[16:13:14] <MatheusOl> ops
[16:13:17] <MatheusOl> Huge document?
[16:13:52] <MatheusOl> I think you will have to do map/reduce and use finalize
[16:14:02] <arnpro> yepp
[16:14:13] <arnpro> I am on it right now, I am just not having the desired result
[16:14:43] <MatheusOl> But with that you can't order the results
[16:15:07] <arnpro> MatheusOl: http://pastebin.com/mJaKrqYu that's what I got so far
[16:15:19] <MatheusOl> well, you can but you will filter them all anyway
[16:15:43] <arnpro> I thought about having a master array that counts from 0 to 10, if the count of user is 10 ignore it, something like that
[16:16:14] <MatheusOl> arnpro: Do you care about the order?
[16:16:53] <arnpro> I shouldnt ask for too much at this level, but I don't
[16:17:09] <arnpro> I can order it or find the way later
[16:17:11] <MatheusOl> you have to insert user_items_count on the scope
[16:17:26] <MatheusOl> oh! You did
[16:17:27] <MatheusOl> sorry
[16:17:28] <arnpro> It is inserted, hehehe it's not well documented... had me thinking but found it
[16:18:01] <arnpro> so that's the first step done, what would be the 2nd? "ignore the document if the count is over 10"
[16:19:42] <arnpro> oh god, that code is hanging the server
[16:20:03] <IAD> -
[16:20:18] <MatheusOl> you do need some code in your reduce function
[16:21:08] <arnpro> why isn't it that easy like return (count below 10) ? reduced : null; :(
[16:22:08] <arnpro> MatheusOl: what code should I have in the reduce function?
[16:24:32] <MatheusOl> Got it
[16:24:37] <MatheusOl> wait a sec
[16:24:51] <MatheusOl> id_str is the group, right?
[16:24:54] <arnpro> right
[16:25:19] <MatheusOl> and what is the field that contains the value you want?
[16:26:17] <arnpro> there is no field, just count the docs the user has, if more than limit drop all the next ones
[16:26:31] <arnpro> I will pastie a sample collection
[16:27:35] <MatheusOl> what?
[16:27:41] <MatheusOl> makes no sense to me
[16:27:59] <MatheusOl> I've got go to a meeting, but try to adapt the idea:
[16:28:13] <MatheusOl> http://pastebin.com/hTWN81yq
[16:28:21] <MatheusOl> It gets the tweets by user
[16:28:28] <MatheusOl> or at most 10
[16:28:32] <arnpro> alright!
[16:28:35] <arnpro> thanks alot MatheusOl
[16:28:37] <arnpro> gl
[16:43:28] <MatheusOl> arnpro: http://pastebin.com/kxiMt0K9
[16:45:37] <arnpro> worked MatheusOl !
[16:45:49] <arnpro> that's just count, tho I need to store the first 10 tweets now ;P
[16:45:58] <arnpro> I was working with your previous version, will try with new 1
[16:46:13] <arnpro> seems much cleaner
[16:47:36] <smith__> while trying to run mongo I got the following error: terminate called after throwing an instance of 'std::runtime_error' what(): locale::facet::_S_create_c_locale name not valid
[16:48:15] <smith__> I've tried to re-gen the locale with locale-gen en_US.UTF-8 (Generating locales... en_US.UTF-8... done) but the problem persists
[16:50:42] <MatheusOl> arnpro: if you want to store them, use the first version
[16:50:44] <arnpro> MatheusOl: I think it's done http://pastebin.com/Jfxk0b15 :D tyvm#
[16:50:56] <arnpro> big time
[16:52:04] <MatheusOl> arnpro: It may break
[16:52:10] <rideh> how do i change the datatype for a set of data from int to boolean? I imported a series of data and of course it didn't know how to interpret it.
[16:52:34] <MatheusOl> arnpro: If your reduce function uses tweets as an array, it may receive an array too
[16:52:50] <MatheusOl> arnpro: In some cases, MongoDB may send a result of a reduce function to another reduce function
[16:52:57] <arnpro> So I will keep the cnt var ?
[16:53:09] <MatheusOl> The cnt var is not the problem
[16:53:25] <MatheusOl> The second parameter of emit and the return of reduce must have the same schema
[16:54:26] <arnpro> alright, I see what you mean, so just reverse the order of params, and good to go, right?
[16:54:27] <kali> the second argument of the emit calls, the return value of the reduce MUST have the same form, and the second arg of the reduce calls will be an array of these same things
[16:55:30] <MatheusOl> no
[16:55:39] <MatheusOl> like this: http://pastebin.com/Em46jF0d
[17:00:00] <arnpro> MatheusOl: I see what you mean, however the new code is only storing 10 fields of each tweet :P
[17:01:30] <arnpro> only happening when the cnt gets to 10 MatheusOl
[17:02:27] <MatheusOl> I'm confused
[17:02:33] <MatheusOl> Wasn't that what you wanted?
[17:02:46] <arnpro> MatheusOl: fixed without the values[i].tweets[j] just added values[i]
[17:02:51] <arnpro> sorry, values[i].tweets
[17:02:56] <arnpro> excellent :) working now
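The thread above keeps circling two mapReduce rules worth restating: the value emitted by map and the value returned by reduce must have the same schema (because reduce can be re-run over its own partial output in stages), and a key that is emitted only once skips reduce entirely. A pure-Python sketch of the "at most 10 tweets per user" job under those rules; the mapReduce engine itself is simulated here, and the field names `id_str`/`text` follow the pastebins:

```python
from collections import defaultdict

LIMIT = 10  # keep at most this many tweets per user

def map_fn(doc, emit):
    # emit an ARRAY of tweets, so single-emit keys (which skip reduce)
    # already have the same shape as reduced results
    emit(doc["id_str"], {"tweets": [doc["text"]], "cnt": 1})

def reduce_fn(key, values):
    # must return the same schema it receives: it may be re-reduced in stages
    out = {"tweets": [], "cnt": 0}
    for v in values:
        for t in v["tweets"]:
            if len(out["tweets"]) < LIMIT:
                out["tweets"].append(t)
        out["cnt"] += v["cnt"]
    return out

def run_map_reduce(docs):
    """Toy engine: bucket emits by key, then reduce each bucket."""
    buckets = defaultdict(list)
    for d in docs:
        map_fn(d, lambda k, v: buckets[k].append(v))
    # a key with a single emitted value skips reduce, as MongoDB does
    return {k: (vs[0] if len(vs) == 1 else reduce_fn(k, vs))
            for k, vs in buckets.items()}
```

Because map already emits arrays, a one-tweet user and a fifteen-tweet user come out with the same shape, which is exactly the mismatch debugged later in the log.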
[17:05:38] <owen1> is it possible for a client to access the nearest mongo host in a replicaset (read preference = near) in version 2.0.1?
[17:06:00] <owen1> and how to set secondaries to accept reads?
[17:06:02] <daslicht> is it possible to start mongodb with php ?
[17:12:18] <joec> I have a question on mongostat. Under the update column, is that reporting the number of update commands issues or the number of documents affected by an update during that time period?
[17:16:53] <arnpro> how to save an aggregate() result into a new collection?
[17:17:49] <mr_smith> couldn't you just: foo = db.bar.aggregate(…); db.foobar.save(foo)?
[17:30:56] <arnpro> I got an starter question :p, how can I extract all the tweets from documents like these: http://pastebin.com/4zau1Zu1 and store them all together?
[17:31:39] <arnpro> without the fancy nested stuff
[17:45:23] <br-> hi. what does this mean? inserting a trivial document: pymongo.errors.OperationFailure: _a != -1
[17:52:36] <rideh> how do i determine bson type of a field?
[19:04:10] <saml> where can i find mongodb schema?
[19:05:02] <kali> errrr
[19:05:05] <kali> what do you mean
[19:05:22] <saml> never mind.. i need to read up on mongodb
[19:05:40] <saml> i need to write a script that imports some data to mongodb.. and i was not sure about structure
[19:06:04] <saml> do you use some gui client to mongodb?
[19:07:07] <kali> i don't... but some are: http://mongodb.onconfluence.com/display/DOCS/Admin+UIs
[19:09:51] <arnpro> hey MatheusOl, are you still around?
[19:12:25] <saml> oh i connected to mongodb
[19:13:17] <saml> version() returns client version or server version?
[19:13:23] <kali> client
[19:14:13] <saml> how can I find server version?
[19:14:30] <kali> look at db.help(), the right command should be listed somewhere
[19:15:14] <saml> ah thanks
[19:15:45] <arnpro> kali, how can I handle a document that does not necesarily fits into the reduce function of a mapReduce procedure?
[19:21:56] <kali> arnpro: i don't understand your question
[19:23:53] <saml> prompt = function() { return db.hostInfo().system.hostname + '> '; };
[19:23:55] <saml> nice
[19:24:50] <saml> actually not nice since all servers have hostname set to mongodb01
[19:27:06] <arnpro> kali http://pastebin.com/7rcQ0RWW can you please take a look ?
[19:29:54] <saml> how can I get argv (arguments passed to mongo shell) in .mongorc.js ?
[19:30:06] <kali> arnpro: so what's the problem ?
[19:30:48] <arnpro> kali, The schema between the 1-tweet users and the 1+ tweet users is different, because the next step is to extract them all from there and put them into a collection
[19:31:35] <kali> arnpro: yeah, this is what i meant earlier: the emitted value must match the reduced result
[19:32:09] <arnpro> kali: I think it was worked out, tho? am I still missing something in the map/reduce functions ?
[19:33:31] <kali> arnpro: can you show me one document from the tweets collection ?
[19:33:39] <arnpro> sure will do
[19:33:52] <owen1> how to rename a key in a collection? this does nothing: db.devices_ypmobile.update( {}, { $rename: { 'udid': 'deviceId'} }, {multi: true} )
[19:34:53] <kali> owen1: it's not "{multi: true}", its "false, true" in the shell
[19:35:18] <owen1> kali: oh. thank
[19:37:13] <arnpro> kali: sample collection here http://pastebin.com/dSZ2UbeF
[19:37:44] <kali> arnpro: good. now, what do you expect the output to look like ?
[19:39:20] <MatheusOl> arnpro: the map function should emit an array
[19:39:30] <MatheusOl> arnpro: and them on reduce, you can treat it as one
[19:39:45] <MatheusOl> arnpro: so you will always have the 1+ model, even if it is only one
[19:40:20] <MatheusOl> notice that on emit you are sending ONE tweet, and on reduce you are returning one ARRAY
[19:40:28] <MatheusOl> that is completely wrong
[19:40:35] <arnpro> here's the expected out, http://pastebin.com/kzF1Et2C, so even 1 tweet users, should be shown as array
[19:40:55] <owen1> kali: where does it say what the true and false means? http://docs.mongodb.org/manual/applications/update/
[19:41:41] <arnpro> MatheusOl: how should I be sending that param?
[19:42:03] <kali> arnpro: so, as MatheusOl says: you must emit an array in the map() function
[19:42:21] <MatheusOl> http://pastebin.com/Ga2Sh8wv
[19:42:31] <kali> whatever you emit must match the expected output
[19:42:36] <MatheusOl> Check the changes at line 4 and 12
[19:42:49] <MatheusOl> I think just that is enough
[19:43:24] <MatheusOl> arnpro: I've got go now, maybe I'll come back on monday
[19:43:27] <MatheusOl> good luck
[19:43:34] <arnpro> MatheusOl: it was it
[19:43:41] <arnpro> I see it now, I do not know how I missed that
[19:43:53] <arnpro> thanks kali & MatheusOl, sorry for dragging it too long
[19:44:04] <MatheusOl> never mind
[19:44:07] <arnpro> I see what you both meant
[19:44:16] <MatheusOl> have a nice weekend everybody
[19:44:21] <MatheusOl> arnpro: I'm happy you got it
[19:44:24] <arnpro> you too
[19:45:18] <kali> owen1: the { multi: true } is quite new... what version is your client ?
[19:45:51] <kali> arnpro: good. now you just need to write the right reduce function :)
[19:46:33] <owen1> kali: MongoDB shell version: 2.0.1
[19:46:34] <arnpro> hehe, actually I need to join them all into 1 collection
[19:46:53] <kali> owen1: bingo. you need to use the old syntax
[19:47:09] <kali> owen1: i did not even know about the new syntax
[19:47:10] <owen1> kali: where do i find the documentation for the old syntax?
[19:48:04] <kali> owen1: http://docs.mongodb.org/manual/reference/method/db.collection.update/#db.collection.update
[19:48:20] <owen1> kali: is there a way to access old versions of the docs? i use mongo 2.0.1 and don't even know how to make my secondaries readable and make my app access the 'nearst' mongo host.
[19:49:19] <kali> owen1: not that i know of. TBH, i'd be glad if we had a well working doc for the current version. this documentation migration is a PITA and it takes forever
[19:50:25] <kali> owen1: also, mongodb is still very green. you really want to bump version (at least to 2.0.8) and consider bumping to 2.2.2
[19:50:33] <arnpro> kali can I modify the _ids when using $unwind? because when trying to move the tweets into 1 single collection the user_ids get duplicated
[19:51:12] <arnpro> kali: I'm trying somehintg like this: http://pastebin.com/1DnJZPEq
[19:51:20] <kali> arnpro: no, but you can generate new ids after $unwinding
[19:51:51] <owen1> kali: got it. thanks a lot
[19:56:15] <saml> json document is pretty large to look at in mongo shell. how do you look at json? can mongo shell fold some items.. etc?
[19:57:50] <saml> can mongo shell pipe to less if output doesn't fit in screenful?
[20:14:27] <pquery> can you set slaveOk from the shell on a secondary?
[20:14:50] <pquery> with db.SlaveOk, it doesn't seem to be taking for me
[20:17:19] <joe_p> pquery: rs.slaveOk()
[20:23:21] <arnpro> is it possible to have db. available in a finalize function?
[20:23:23] <JoeyJoeJo> What's a good way to convert a pymongo cursor into json? Right now I just iterate over the cursor and pass a dict to json.dumps. Is there a faster way?
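One common answer to JoeyJoeJo's question: `json.dumps` accepts a `default=` hook that is called for any value it cannot serialize, and stringifying ObjectId/datetime that way is usually good enough for debugging output (pymongo also ships `bson.json_util.dumps` for strict Extended JSON). A stdlib-only sketch using a stand-in ObjectId class, since `bson` itself may not be installed:

```python
import json
from datetime import datetime

class ObjectId:
    """Stand-in for bson.ObjectId, just for this sketch."""
    def __init__(self, hex_str):
        self.hex = hex_str
    def __str__(self):
        return self.hex

def dump_docs(cursor):
    # default= is invoked for every value json can't serialize natively;
    # str() turns ObjectId and datetime into readable strings
    return json.dumps(list(cursor), default=str)
```

This still iterates the whole cursor; there is no way around materializing the documents you want to serialize.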
[20:27:34] <arnpro> why is the reduce skipped for an unique key document??
[20:27:56] <kali> because there is nothing to reduce
[20:28:24] <pquery> I'm a fan of mongo, but mongoimport seems a bit slow, is there a reason that 50K documents would take over 12 minutes to import?
[20:28:34] <arnpro> kali: how can I workaround it? Itd be pretty good to insert each tweet the reduce function finds, but what about the unique ones?
[20:29:07] <kali> arnpro: you should not access the data from the reduce
[20:29:10] <pquery> and it gives strange output -- 337572429216/13831177 2440663%
[20:29:31] <kali> arnpro: and don't forget that mongo can call it several times for the same key to reduce in stages
[20:29:48] <kali> pquery: how big are your docs ?
[20:31:00] <arnpro> kali so im pretty much lost again hehe, my idea was to extract the tweets nested array from the mapreduce result and store them in a collection. as I have searched, it returns the whole document where your keyword was found, not just a part of it
[20:33:11] <pquery> kali: not that big, like 2 lines apiece. I didn't count the bytes because one of the data team made them, but I opened up the json and looked and they're small
[20:33:22] <saml> hey, how can i get json?
[20:33:28] <arnpro> kali: is it valid for me to move the reduce code into finalize?
[20:33:56] <saml> db.foo.findOne({"x": 1}) returns something like {"id": ObjectId( ...)} which isn't json
[20:34:08] <kali> arnpro: well, you still need t oreduce your results
[20:34:19] <kali> arnpro: i'm not sure i understand what you're after
[20:34:46] <arnpro> kali: I need to get the reduced results and the not-reduced ones, and store them apart
[20:34:53] <kali> saml: mongoexport will produce json if you ask nicely
[20:35:27] <saml> how can I view large json? mongo shell isn't good
[20:35:48] <kali> saml: printjson may help
[20:37:52] <saml> mongo listings --eval 'printjson(db.Listing.findOne({"url": "ca-va"}))' > some.json
[20:38:02] <saml> this ain't good because printjson prints
[20:38:09] <saml> "id": Object...
[20:38:45] <kali> saml: try with mongoexport, then
[20:38:54] <arnpro> kali: you mentioned something about generating _ids when using $unwind, how can I achieve that?
[20:39:27] <kali> arnpro: you $unwind, and at the next stage you use project to generate new ids
[20:43:26] <arnpro> kali: like this? http://pastebin.com/D7ynAHxQ
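kali's point about regenerating ids: after `$unwind`, every output document still carries the parent's `_id`, so inserting the results straight into a new collection collides on `_id`. A pure-Python sketch of unwinding with fresh ids (`unwind_with_new_ids` is a hypothetical helper, with uuid4 standing in for ObjectId generation):

```python
import uuid

def unwind_with_new_ids(docs, array_field):
    """Flatten one document per array element, dropping the parent _id and
    assigning a fresh unique id to each unwound document so they can be
    inserted into a new collection without duplicate-key errors."""
    out = []
    for doc in docs:
        for elem in doc.get(array_field, []):
            # copy the parent's other fields, but not its _id or the array
            new_doc = {k: v for k, v in doc.items()
                       if k not in ("_id", array_field)}
            new_doc[array_field] = elem
            new_doc["_id"] = uuid.uuid4().hex  # fresh id per element
            out.append(new_doc)
    return out
```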
[20:55:38] <arnpro> is there an operator for $project-ing all fields xD ?
[20:56:01] <arnpro> instead of going field1:1,field2:1... fieldn:1 etc
[21:58:30] <dan_> Hi! When I'm in the console on one of my mongodb shards, running db.stats() against all non-empty db's reports 26 indexes. But the MMS index count graph (DB Stats->Indexes) for the same shard reports 42 indexes. Is there any way to reconcile the difference between these values? Per the deployed code, I expect 26 indexes.
[22:04:47] <dan_> maybe MMS is instrumenting mongo-internal system indexes that wouldn't be inventoried by db.stats() ?